CSC 111 - Introduction To Computer Science - Corrected Version
Requirements
Registered students for this course will be provided with login details at the point of registration.
Download and read through the unit of instruction stated for each week before the scheduled time of
interaction with the course tutor/facilitator. You can also download and watch the relevant videos
and listen to the podcasts so that you can understand and follow the course facilitator.
At the scheduled time, you are expected to log in to the classroom for interaction.
Self-assessment components of the courseware are available as exercises to help you learn and
master the content you have gone through.
You are to answer the Tutor Marked Assessment (TMA) for each unit and submit it for assessment.
Beyond regular classroom attendance, weight will be given to assignments and the final
examination as follows:
Total 100%
Unit 1: Definition, History of Computers and its classification
1.0 Introduction.
2.0 Learning Outcomes
3.0 Main Content
3.1 Basic Computing Concepts
3.2 The History of Computer
3.2.1 Abacus
3.2.2 Blaise Pascal
3.2.3 Joseph Marie Jacquard
3.2.4 Charles Babbage
3.2.5 Augusta Ada Byron
3.2.6 Herman Hollerith
3.2.7 John Von Neumann
3.2.8 J. V. Atanasoff
3.2.9 Howard Aiken
3.2.10 Grace Hopper
3.2.11 Bill Gates
3.2.12 Philip Emeagwali
3.3 Computer Classification
3.3.1 Classification by Generation
3.3.2 Classification by Nature of Data
3.3.3 Classification by Size
3.3.4 Classification by Purpose
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading
1.0 INTRODUCTION
This unit covers the basic concept of a computer, its historical development over the years, as
well as its classification. Computers are classified by size, nature of data they can process,
generation, and purpose.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
I. Define a Computer
II. Describe the development of computers over the years
III. State the nature of data computers can process
IV. Compare generations of computers
V. Differentiate computer by size or by purpose
3.0 Main Content
3.1 Basic Computing Concepts
A computer can be described as an electronic device that accepts data as input and processes the
data based on a set of predefined instructions, called a program, to produce the result of these
operations as output, called information. From this description, a computer can be referred to as
an Input-Process-Output (IPO) system, pictorially represented in Figure 1:
INPUT PROCESS OUTPUT
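The Input-Process-Output model can be sketched in a few lines of code. The following Python fragment is a minimal illustration only; the average-score program is a made-up example, not part of the courseware.

```python
# A minimal sketch of the Input-Process-Output (IPO) model:
# data goes in, a predefined set of instructions (the "program")
# processes it, and information comes out.

def process(data):
    # the "program": here it simply computes the average of the input data
    return sum(data) / len(data)

scores = [70, 85, 90]        # INPUT: raw data
average = process(scores)    # PROCESS: apply the stored instructions
print(average)               # OUTPUT: information (the average score)
```

Any program, however large, can be viewed in these three stages.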
Figure 1.4: Jacquard's Loom showing the threads and the punched cards
Figure 1.5: By selecting particular cards for Jacquard's loom you defined the woven pattern
[photo © 2002 IEEE]
3.2.4 Charles Babbage
Charles Babbage was born in Totnes, Devonshire on December 26, 1792 and died in London on
October 18, 1871. He was educated at Cambridge University where he studied Mathematics. In
1828, he was appointed Lucasian Professor at Cambridge. Charles Babbage started work on his
Analytic Engine when he was a student. His objective was to build a program-controlled,
mechanical, digital computer incorporating a complete arithmetic unit, store, punched card input
and a printing mechanism.
The program was to be provided by a set of Jacquard cards. However, Babbage was unable to
complete the implementation of his machine because the technology available in his time was not
adequate to see him through. Moreover, he did not plan to use electricity in his design. It is
noteworthy that Babbage's design features are very close to the design of the modern computer.
Babbage is also credited with the modern postal system, cowcatchers on trains, and the
ophthalmoscope, which is still used today to examine the eye.
Figure1.6: A small section of the type of mechanism employed in Babbage's Difference
Engine [photo © 2002 IEEE]
3.2.5 Augusta Ada Byron
Ada Byron was the daughter of the famous poet Lord Byron and a friend of Charles Babbage
(Ada later became the Countess of Lovelace by marriage). Though she was only 19, she was
fascinated by Babbage's ideas, and through letters and meetings with Babbage she learned enough
about the design of the Analytic Engine to begin fashioning programs for the still unbuilt
machine. While Babbage refused to publish his knowledge for another 30 years, Ada wrote a
series of "Notes" wherein she detailed sequences of instructions she had prepared for the
Analytic Engine. The Analytic Engine remained unbuilt, but Ada earned her spot in history as the
first computer programmer. Ada invented the subroutine and was the first to recognize the
importance of looping.
3.2.6 Herman Hollerith
Hollerith was born in Buffalo, New York in 1860 and died in Washington in 1929. Hollerith
founded a company which merged with two other companies to form the Computing Tabulating
Recording Company, which in 1924 changed its name to International Business Machines (IBM)
Corporation, a leading company in the manufacturing and sales of computers today.
While working at the Census Department in the United States of America, Hollerith became
convinced that a machine based on punched cards could assist in the purely mechanical work of
tabulating population and similar statistics. He left the Census in 1882 to start work on the
Punch Card Machine, also called the Hollerith desk.
This machine system consisted of a punch, a tabulator with a large number of clock-like counters
and a simple electrically activated sorting box for classifying data in accordance with values
punched on the card. The principle he used was simply to represent logical and numerical data in
the form of holes on cards.
His system was installed in 1889 in the United States Army to handle Army medical statistics.
He was then asked to install his machine to process the 1890 Census in the USA. This he did, and
the processing of the census data, which used to take ten years, was completed in two years.
Hollerith's machine was later used in other countries such as Austria, Canada, Italy, Norway and Russia.
Figure 1.7: Hollerith desks [photo courtesy The Computer Museum]
Figure 1.10: One of the four paper tape readers on the Harvard Mark I
3.2.10 Grace Hopper
Grace Hopper was one of the primary programmers for the Mark I. Hopper found the first
computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the
reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at
least 1889 but Hopper is credited with coining the word "debugging" to describe the work to
eliminate program faults.
V. Fifth Generation Computer: The development of fifth generation computers started
in the 1980s, and research is still going on in this generation of computers. It
is regarded as the present and future of computing. Although some of these machines are
already in use, a lot of work still needs to be done to actualize the goals of this
generation of computer. The objective is to build a computer system that mimics the
intelligence of a human expert in a knowledge domain such as medicine, law, education,
criminal investigation, etc. This objective is pursued through the implementation of
Artificial Intelligence and Expert Systems.
4.0 Summary
In this unit, you have learnt that:
A computer is an electronic device that accepts data as input and processes the data
through predefined instructions to produce information.
The historical development of the computer ran from the Abacus to the modern electronic
computer.
Computers can be classified based on their generation, the nature of data they can process,
their size, as well as the purpose for which they are built.
5.0 Self-Assessment
A. Discuss the contribution(s) of any three (3) persons to the history of computers
B. With appropriate examples, differentiate between Analogue and Digital Computers
C. Describe classification of Computers by Purpose
8.0 References
Egbewole, W. and Jimoh, R. (Eds.). (2017). Digital Skill Acquisition. Ilorin, Nigeria: Unilorin.
Dale, N. (2005). Computer Science Illuminated. London: Jones and Bartlett.
French, C. (2001). Introduction to Computer Science. London: Continuum.
Unit 2: Basic Components of Computer
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Basic Component of Computer
3.2 The Hardware
3.3 The Software
3.3.1 System Software
3.3.2 Application Software
3.4 The Humanware
3.5 Organizational Structure of a Typical Computer Installation
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading
The CPU consists of the main storage, the ALU and the control unit. The main storage is used for
storing data to be processed as well as the instructions for processing them. The ALU is the unit
for arithmetic and logical operations. The control unit ensures the smooth operation of the other
hardware units. It fetches an instruction, decodes (interprets) the instruction and issues commands
to the units responsible for executing the instructions.
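The control unit's fetch-decode-execute cycle can be sketched as a toy simulation. The three-instruction set below (LOAD, ADD, HALT) is invented purely for illustration and does not correspond to any real machine.

```python
# A toy sketch of the control unit's fetch-decode-execute cycle.
# The instruction set (LOAD/ADD/HALT) is hypothetical.

memory = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", None)]
accumulator = 0
pc = 0  # program counter: address of the next instruction

while True:
    opcode, operand = memory[pc]   # fetch the instruction from main storage
    pc += 1
    if opcode == "LOAD":           # decode, then execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 10
```

A real control unit does the same three steps in hardware, millions of times per second.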
The peripherals are in three categories: Input devices, Output devices and auxiliary storage
devices.
Input devices are used for supplying data and instructions to the computer. Examples are the
keyboard, mouse, joystick, microphone, scanner, webcam, etc.
Output devices are used for obtaining results (information) from the computer. Examples are
printers, the Video Display Unit (VDU), loudspeakers, projectors, etc.
Auxiliary storage devices are used for storing information on a long-term basis. Examples are the
hard disk, flash disk, magnetic tape, memory card, solid-state drive (SSD), etc.
[Block diagram: the peripherals (input unit, output unit and auxiliary storage unit) connected to
the Central Processing Unit, which comprises the main memory, the arithmetic and logic unit and
the control unit.]
It is the software that enables the hardware to be put into effective use. There are two main
categories of software – System software and Application software.
System software are programs commonly written by computer manufacturers, which have direct
effect on the control, performance and ease of usage of the computer system. Examples are
Operating System, Language Translators, Utilities and Service Programs, and Database
Management Systems (DBMS).
Operating System is a collection of program modules which form an interface between the
computer hardware and the computer user. Its main function is to ensure a judicious and
efficient utilization of all the system resources (such as the processor, memory, peripherals and
other system data) as well as to provide programming convenience for the user. Examples are
Unix, Linux, Windows, Macintosh, and Disk Operating system.
Assembler: This is a program that converts a program written in assembly language (low
level language) into its machine language equivalent.
Interpreter: This is a program that converts a program written in a high level language
(HLL) into its machine language (ML) equivalent one line at a time. A language like
BASIC is normally interpreted.
Compiler: This is a program that translates a program written in a high level language
(HLL) into its machine language (ML) equivalent all at once. Compilers are normally called
by the names of the high-level languages they translate, for instance the COBOL compiler,
the FORTRAN compiler, etc.
Preprocessor: This is a language translator that takes a program in one HLL and
produces an equivalent program in another HLL. For example, there are many preprocessors
to map structured versions of FORTRAN into conventional FORTRAN.
Database Management System (DBMS) is a complex program that is used for creation,
storage, retrieving, securing and maintenance of a database. A database can be described as an
organized collection of related data relevant to the operations of a particular organization. The
data are stored usually in a central location and can be accessed by different authorized users.
Linker is a program that takes several object files and libraries as input and produces one
executable object file.
Loader is a program that places an executable object file into memory and makes it ready for
execution. Both the linker and the loader are provided by the operating system.
These are programs which provide facilities for performing common computing tasks of a
routine nature. The following are some of the examples of commonly used utility programs:
Sort Utility: This is used for arranging records of a file in a specified sequence
(alphabetic, numerical or chronological) of a particular data item within the records. The
data item is referred to as the sort key.
Merge Utility: This is used to combine two or more already ordered files together to
produce a single file.
Copy Utility: This is used mainly for transferring data from a storage medium to the
other, for example from disk to tape.
Debugging Facilities: These are used for detecting and correcting errors in program.
Text Editors: These provide facilities for creation and amendment of program from the
terminal.
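The sort and merge utilities described above can be sketched with Python's standard library. The record data below is invented for illustration; the key point is sorting on a chosen field (the sort key) and combining two already-ordered files into one.

```python
# A sketch of the sort and merge utilities, using Python's standard library.
import heapq

# Sort utility: arrange records in sequence on a particular data item
# (the sort key) - here, the second field of each hypothetical record.
records = [("Bello", 45), ("Ade", 72), ("Chi", 60)]
by_score = sorted(records, key=lambda r: r[1])

# Merge utility: combine two already-ordered files into a single ordered file.
file_a = [1, 4, 7]
file_b = [2, 3, 9]
merged = list(heapq.merge(file_a, file_b))

print(by_score)
print(merged)  # [1, 2, 3, 4, 7, 9]
```

Real operating-system utilities work the same way, but on files on disk rather than in-memory lists.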
These are programs written by a user to solve individual application problems. They do not have
any effect on the efficiency of the computer system. An example is a program to calculate the
grade point average of all the 100L students. Application software can be divided into two
namely: Application Package and User’s Application Program. When application programs
are written in a very generalized and standardized nature such that they can be adopted by a
number of different organizations or persons to solve similar problem, they are called
Application Packages. There are a number of micro-computer based packages. These include
word processors (such as Ms-word, WordPerfect, WordStar); Database packages (such as
Oracle, Ms-access, Sybase, SQL Server, and Informix); Spreadsheet packages (such as Lotus 1-
2-3 and Ms-Excel); Graphic packages (such as CorelDraw, Fireworks, Photoshop etc), and
Statistical packages (such as SPSS). User’s Application Program is a program written by the
user to solve a specific problem which is not generalized in nature. Examples include writing a
program to find the roots of a quadratic equation, a payroll application program, and a program to
compute students' results.
The computer system is automatic in the sense that, once initiated, it can continue to work on
its own, without human intervention, under the control of a stored sequence of instructions (a
program). However, it is not automatic in the sense that it has to be initiated by a human being,
and the instructions specifying the operations to be carried out on the input data are given by
human beings. Therefore, apart from the hardware and software, the third element that can be
identified in a computer system is the humanware. This term refers to the people that work with
the computer system. The components of the humanware in a computer system include the system
analyst, the programmer, the data entry operator, end users, etc.
The Data Processing Manager (DPM) supervises every other person that works with him and is
answerable directly to the management of the organization in which he works.
A Programmer is the person that writes the sequence of instructions to be carried out by the
computer in order to accomplish a well-defined task. The instructions are given in computer
programming languages.
A data entry operator is the person that enters data into the system via the keyboard or any input
device attached to a terminal. There are other ancillary staff that perform other functions, such
as controlling access to the computer room and controlling the flow of jobs in and out of the
computer room.
An end-user is one for whom a computerized system is being implemented. The end-user
interacts with the computerized system in the day-to-day operations of the organization. For
example, a cashier in a bank who receives cash from customers or pays money to customers
interacts with the banking information system.
4.0 Summary
In this unit, you have learnt that:
Hardware is the physical components of the system; software is not physical and is also
referred to as programs; while humanware refers to the people that work with the computer system.
The organizational structure includes the data processing manager, the system analyst, the data
entry operator, the end user and so on.
5.0 Self-Assessment
1. Differentiate between hardware and software
2. Draw the diagrammatic representation of a computer installation
8.0 References
Egbewole, W. and Jimoh, R. (Eds.). (2017). Digital Skill Acquisition. Ilorin, Nigeria: Unilorin.
Unit 3 Characteristics and Advantages of Computers
1.0 Introduction.
2.0 Learning Outcomes
3.0 Main Content
3.1 Characteristics of Computers
3.2 Advantages of Computers
3.3 Disadvantages of Computer
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading
1.0 Introduction
In this unit, you will learn the characteristics of a computer. These are the features or attributes
that a machine must possess for it to be regarded as a computer. The advantages of computers
are the rewards the user or society at large gets from the use of computers, while the
disadvantages are the negative aspects of computer systems. Today, people use computers to make
work easier and faster, as well as to reduce the overall cost of completing a task.
The computer can use its stored information better than humans. Information in the computer can
be sorted or organized into different categories, and it can also be searched faster.
Through different fields of computer technology, for example data mining, computers can find
patterns in data and use what is learnt to predict future occurrences to a certain extent. For
example, using the database, the computer can predict that any time bread is sold, eggs or butter
or a beverage is sold along with it. This will help the business owner to know that s/he must not
run out of eggs, butter or beverages when bread is available for sale.
The computer helps to connect to the internet, where choices available are endless once
connected. Many advantages of the computer today are through connections to the internet.
The computer can help its user to improve in several areas, for example through the use of a
spell checker, a grammar corrector and so on. The user improves his/her abilities even if s/he
has a hard time learning.
Computers are excellent tools that can be used to help the physically challenged. For
example, with text-to-speech software the user types and the computer reads the text out, which
can help a physically challenged user that cannot speak. Computers are also great tools for the
blind, with special software that can be installed to read out what is on the screen.
VI. Entertainment
The computer can keep its user entertained. Different songs, videos, as well as games can be
stored on the computer for use.
3.3 Disadvantages of Computers
Though the advantages of computer devices are numerous, there still exist some disadvantages.
Some of these disadvantages are:
A. Attack
Attacks can come in the form of viruses and hacking. A virus spreads malicious code through the
system, while hacking is unauthorized access to the system.
Online cybercrime means a computer and a network have been used to commit a crime.
Cyberstalking and identity theft are among the most common cybercrimes today.
Computers have made it possible for people to work alone on tasks on which they would need to
collaborate with others if they had to do them manually, such as entering figures in books of
accounts. As a result, the work environment in offices is such that workers concentrate on their
computers with little interaction with colleagues.
7.0 References
Egbewole, W. and Jimoh, R. (Eds.). (2017). Digital Skill Acquisition. Ilorin, Nigeria: Unilorin.
8.0 Further Reading
http://oer.nios.ac.in/wiki/index.php/characteristics_of_computers
http://ecomputernotes.com/fundamental/introduction-to-computer/what-are-characteristic-
of-a-computer
http://www.byte-notes.com/advantages-and-disadvantages-computers
https://www.computerhope.com/issues/ch001798.htm
Module 2 Number Bases and Computer Arithmetic
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Number Base Arithmetic
3.2 Number Base Types
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading
1.0 Introduction
A number base is a way of representing numbers in computing. There are different types of
number base systems, which means numbers can be represented in any of the bases. The number
bases we will cover include the decimal number system, which uses digits 0-9; the binary number
system, with digits 0 and 1; the octal number system, using digits 0-7; and the hexadecimal
system, with digits 0-9 and letters A-F.
The number system is a writing system for representing numbers of a given set, using
digits or other symbols in a consistent manner. Numbers in the decimal system are
represented by means of positional notation; that is, the value or weight of a digit
depends on its location within the number. A number N, when expressed in positional
notation in the base b, is written as:
a_n a_(n-1) a_(n-2) … a_1 a_0 . a_(-1) a_(-2) … a_(-m)
and defined as
a_n b^n + a_(n-1) b^(n-1) + … + a_1 b^1 + a_0 b^0 + a_(-1) b^(-1) + a_(-2) b^(-2) + … + a_(-m) b^(-m)
---------------------- (2.1)
The a's in the above equation are called digits, and each may have one of b possible values.
Positional notation employs the radix point to separate the integer and fractional parts of
the number. Decimal arithmetic uses the decimal point while binary arithmetic uses
binary point.
A subscript is used to indicate the base of a number when necessary, for example, 123_10,
456_8, 1011_2.
The decimal number system is the system we use in our day-to-day activities. It has the base 10
because it uses 10 digits (0-9). The successive positions to the left of the point represent units,
tens, hundreds, thousands and so on. For example, in 5372, the 5 represents thousands, the 3
hundreds, the 7 tens and the 2 units. Each position represents a specific power of the base (10).
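Equation (2.1) can be checked in code. The sketch below evaluates the integer part of a positional numeral by weighting each digit with a power of the base; the helper function name is my own, not from the source.

```python
# Evaluating positional notation (equation 2.1) for integer numerals:
# each digit is weighted by the base raised to the power of its position.

def positional_value(digits, base):
    """digits: list of digit values, most significant first."""
    value = 0
    for d in digits:
        # folding left-to-right is equivalent to summing d * base**position
        value = value * base + d
    return value

# 5372 in base 10: 5*10^3 + 3*10^2 + 7*10^1 + 2*10^0
print(positional_value([5, 3, 7, 2], 10))  # 5372
# 1011 in base 2: 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0
print(positional_value([1, 0, 1, 1], 2))   # 11
```

The same rule, with negative powers of the base, handles digits after the radix point.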
The binary number system is a number expressed in the base 2. It uses only digits 0 and 1 to
represent numbers. The binary system is the language the computer understands (known as
machine language) and is used in modern computers and computer-based devices.
The octal number system has the base 8 because it uses 8 digits (0-7). Octal numerals can be
made from binary numerals by grouping consecutive binary digits into groups of three (starting
from the right).
The hexadecimal number system has the base 16 because it uses 16 digits (0-9 and A-F).
Hexadecimal numerals can be made from binary numerals by grouping consecutive binary digits into
groups of four (starting from the right).
Human beings normally work in decimal and computers in binary. The purpose of the
octal and hexadecimal systems is to serve as an aid to human memory, since their numerals are
shorter than binary ones. Because octal and hexadecimal numbers are more compact than binary
numbers (one octal digit equals three binary digits and one hexadecimal digit equals four
binary digits), they are used in computer texts and core dumps (printouts of part of the
computer's memory). The advantage of binary numbers lies in representing electrical signals
that switch a device on (logical one) or off (logical zero).
4.0 Summary
In this unit, you have learnt that the number base system is a way of representing numbers in
different consistent forms. There are 4 major types of number base systems: decimal,
binary, octal and hexadecimal.
5.0 Self-Assessment
a) List the available number base systems
b) Differentiate between the digits used for each number base
6.0 Tutor Marked Assessment
a) Differentiate between the four number base systems
b) Why is decimal number base preferable for human beings and binary for computers?
7.0 Further Reading
https://www.tutorialspoint.com/computer_fundamentals/computer_number_system.htm
https://en.wikipedia.org/wiki/Binary_number
Unit 2 Number Base Conversion
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Conversion of Integers from Decimal to Binary
3.2 Conversion of Integers from Decimal to Octal
3.3 Conversion of Integers from Decimal to Hexadecimal
3.4 Conversion of Integers from Other Bases to Decimal
3.5 Conversion from Binary Integer to Octal
3.6 Conversion from Binary Integer to Hexadecimal
3.7 Conversion from Octal to Binary
3.8 Conversion from Hexadecimal to Binary
3.9 Conversion from Hexadecimal to Octal
3.10 Conversion from Octal to Hexadecimal
3.11 Conversion of Binary Fractions to Decimal
3.12 Conversion of Decimal Fractions to Binary
3.13 Conversion of Binary Fractions to Octal/Hexadecimal
3.14 Conversion of Octal/Hexadecimal Fractions to Binary
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading
1.0 Introduction
Number base conversion involves converting a number from a particular base to others. It
involves the representation of numbers in different bases. For example, a number in base 10 can
be converted to base 2, base 8 or base 16.
11 = 1 × 2^1 + 1 × 2^0 = 3; 001 = 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 1; 011 = 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = 3;
101 = 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 5
Therefore 11001011101_2 = 3135_8
3.6 Conversion from Binary Integer to Hexadecimal
The binary number is formed into groups of four bits starting at the binary point. Each group is
replaced by a hexadecimal digit from 0 to 9 or A to F.
For example, to convert 11001011101_2 to hexadecimal:
11001011101_2 = 110 0101 1101
110 = 1 × 2^2 + 1 × 2^1 + 0 × 2^0 = 6;
0101 = 0 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 5;
1101 = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 13 = D
Therefore 11001011101_2 = 65D_16
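The grouping procedure used in the two examples above can be expressed as a short program. The function below is a sketch of my own; it pads the bit string on the left and converts each group of three (octal) or four (hexadecimal) bits to one digit.

```python
# Converting a binary integer to octal or hexadecimal by grouping bits
# from the right: three bits per octal digit, four per hexadecimal digit.

def binary_to_base(bits, group):
    digits = "0123456789ABCDEF"
    # pad with leading zeros so the bit string splits evenly into groups
    width = (len(bits) + group - 1) // group * group
    bits = bits.zfill(width)
    return "".join(digits[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

print(binary_to_base("11001011101", 3))  # 3135  (octal)
print(binary_to_base("11001011101", 4))  # 65D   (hexadecimal)
```

This reproduces the worked results 11001011101_2 = 3135_8 = 65D_16.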
0.01101_2 = 0.40625_10
For example, to convert 0.6875_10 to binary:
0.6875 × 2 = 1.3750
0.3750 × 2 = 0.7500
0.7500 × 2 = 1.5000
0.5000 × 2 = 1.0000
0.6875_10 = 0.1011_2
We can convert from decimal fractions to octal or hexadecimal fractions by using the same
algorithms used for binary conversions. We only need to change the base (that is: 2, 8, 16).
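The repeated-multiplication method shown above, with the base as a parameter as the text suggests, can be sketched as follows. The function name and digit limit are my own choices for illustration.

```python
# Decimal fraction to another base by repeated multiplication:
# multiply the fraction by the base, take the integer part as the
# next digit, and continue with the remaining fraction.

def fraction_to_base(fraction, base, max_digits=12):
    digits = "0123456789ABCDEF"
    out = []
    while fraction and len(out) < max_digits:
        fraction *= base
        whole = int(fraction)     # the integer part is the next digit
        out.append(digits[whole])
        fraction -= whole         # continue with the fractional part
    return "0." + "".join(out)

print(fraction_to_base(0.6875, 2))   # 0.1011
print(fraction_to_base(0.40625, 2))  # 0.01101
```

Passing 8 or 16 as the base gives octal or hexadecimal fractions with the same algorithm; `max_digits` guards against fractions with non-terminating expansions.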
3.13 Conversion of Binary Fractions to Octal/Hexadecimal
Split the binary digits into groups of three (four for hexadecimal), starting at the
binary point and moving to the right. Any group of digits remaining on the right containing fewer
than three (four for hexadecimal) bits must be made up to three (four for hexadecimal) bits by the
addition of zeros to the right of the least significant bit.
For example, to convert 0.10101100_2 and 0.10101111_2 to octal:
0.10101100_2 = 0.101 011 00(0)_2 = 0.530_8
0.10101111_2 = 0.101 011 11(0)_2 = 0.536_8
To convert to hexadecimal:
0.10101100_2 = 0.1010 1100 = 0.AC_16
0.101011001_2 = 0.1010 1100 1(000) = 0.AC8_16
3.14 Conversion of Octal/Hexadecimal Fractions to Binary
0.456_8 = 0.100 101 110 = 0.100101110_2
0.ABC_16 = 0.1010 1011 1100 = 0.101010111100_2
4.0 Summary
In this unit, you have learnt conversion of numbers from one base to another, either an integer or
a fraction.
5.0 Self-Assessment
Convert the following:
11001100_2 = ?_10, ?_8, ?_16
45678_10 = ?_2, ?_8, ?_16
6.0 Tutor Marked Assessment
Convert
I. 553.355_10 = ?_2, ?_8, and ?_16
II. A07_16 = ?_10, ?_8 and ?_2
7.0 Further Reading
https://code.tutsplus.com/articles/number-systems-an-introduction-to-binary-hexadecimal-and-
more--active-10848
https://www.talentsprint.com/blog/2018/01/number-system-i-what-is-number-syste.html
https://www.varsitytutors.com/hotmath/hotmath_help/topics/number-systems
Algebra has computations similar to arithmetic but with letters standing for numbers, which
allows proofs of properties that are true regardless of the numbers involved. An example is the
quadratic equation ax^2 + bx + c = 0, where a, b and c can be any numbers (a ≠ 0). Algebra is
used in many studies, for example elementary algebra, linear algebra, Boolean algebra, and so on.
3.2 Polynomials
A polynomial involves operations of addition, subtraction, multiplication, and non-negative
integer exponents of terms consisting of variables and coefficients. For example, x^2 + 2x − 3 is
a polynomial in the single variable x. A polynomial can be rewritten using the commutative,
associative and distributive laws.
An important part of algebra is the factorization of polynomials, that is, expressing a given
polynomial as a product of other polynomials that cannot be factored any further. Another
important part of algebra is the computation of polynomial greatest common divisors. For example,
x^2 + 2x − 3 can be factored as (x − 1)(x + 3).
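The factorization above can be checked numerically: the polynomial and its factored form agree for every value of x we try, and the roots of the factors make the polynomial zero.

```python
# Checking that x^2 + 2x - 3 factors as (x - 1)(x + 3).

def p(x):
    return x**2 + 2*x - 3

def factored(x):
    return (x - 1) * (x + 3)

# both forms agree at every integer from -10 to 10
for x in range(-10, 11):
    assert p(x) == factored(x)

# the roots of the factors (x = 1 and x = -3) make the polynomial zero
print(p(1), p(-3))  # 0 0
```

Agreement at a spread of points is strong evidence for a polynomial identity, since two distinct polynomials of degree 2 can agree at no more than 2 points.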
Boolean algebra can be used to describe logic circuits; it is also used to reduce the complexity
of digital circuits by simplifying their logic. Boolean algebra is also referred to as Boolean
logic. It was developed by George Boole in the 1840s and is greatly used in computations and in
computer operations. The name Boolean comes from the name of the author.
Boolean algebra is a logical calculus of truth values. It somewhat resembles the arithmetic
algebra of real numbers but with a difference in its operators and operations. Boolean operations
involve the set {0, 1}, that is, the numbers 0 and 1. Zero [0] represents “false” or “off” and One
[1] represents “true” or “on”.
1 – True, on
0 – False, off
This has proved useful in programming computer devices, in the selection of actions based on
conditions set.
1. AND
The AND operator is represented by a period or dot in-between the two operands, e.g.
X.Y
The Boolean multiplication operator is known as the AND function in the logic domain;
the function evaluates to 1 only if both the independent variables have the value 1.
2. OR
The OR operator is represented by an addition sign. Here the operation + is different from
that defined in normal arithmetic algebra of numbers. E.g. X+Y
The + operator is known as the OR function in the logic domain; the function has a value
of 1 if either or both of the independent variables has the value of 1.
3. NOT
The NOT operator is represented by X' or X̅.
This operator negates whatever value is contained in or assigned to X; it changes the value
to its opposite. For instance, if the value contained in X is 1, X' gives 0 as the
result, and if the value stored in X is 0, X' gives 1 as the result.
To better understand these operations, a truth table is presented for the result of each of the
operations on any two variables.
Truth Tables
Truth tables are a means of representing the results of a logic function using a table. They are
constructed by defining all possible combinations of the inputs to a function in the Boolean
algebra, and then calculating the output for each combination in turn. The basic truth table shows
the various operators and the result of their operations involving two variables only. More
complex truth tables can be built from the knowledge of the foundational truth table. The number
of input combinations in a Boolean function is determined by the number of variables in the
function and is computed using the formula 2^n:
Number of input combinations = 2^n, where n is the number of variables.
For example, a function with two variables has 2^2 = 4 input combinations, while one with three
variables has 2^3 = 8.
AND
X Y X.Y
0 0 0
0 1 0
1 0 0
1 1 1
OR
X Y X+Y
0 0 0
0 1 1
1 0 1
1 1 1
NOT
X X'
0 1
1 0
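The three tables above can be reproduced in code. In this sketch, Python's bitwise operators `&` and `|` stand in for the Boolean AND and OR on the set {0, 1}, and NOT is written as subtraction from 1.

```python
# The three basic Boolean operators on {0, 1}, printed as truth tables.
# & and | stand in for AND and OR; NOT(x) = 1 - x flips the value.

def NOT(x):
    return 1 - x

print("X Y X.Y X+Y")
for X in (0, 1):
    for Y in (0, 1):
        # X & Y is 1 only when both inputs are 1;
        # X | Y is 1 when either (or both) input is 1
        print(X, Y, X & Y, X | Y)

print("X X'")
for X in (0, 1):
    print(X, NOT(X))
```

Running this prints exactly the AND, OR and NOT tables shown above.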
Example:
• Draw a truth table for A+BC.
A B C BC A+BC
0 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 1 1 1 1
1 0 0 0 1
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
• Draw a truth table for AB+BC.
A B C AB BC AB+BC
0 0 0 0 0 0
0 0 1 0 0 0
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 1 0 1
1 1 1 1 1 1
• Draw a truth table for A(B+D).
A B D B+D A(B+D)
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 1 0
1 0 0 0 0
1 0 1 1 1
1 1 0 1 1
1 1 1 1 1
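Truth tables like the ones above can be generated mechanically by enumerating all 2^n input combinations. The sketch below does this for A+BC; the helper function is my own construction, not part of the courseware.

```python
# Building a truth table for an arbitrary Boolean function by
# enumerating all 2^n input combinations.
from itertools import product

def truth_table(f, n):
    """Return rows of (inputs..., output) for an n-variable function f."""
    return [row + (f(*row),) for row in product((0, 1), repeat=n)]

# A + BC, written with | for OR and & for AND
table = truth_table(lambda A, B, C: A | (B & C), 3)
for row in table:
    print(*row)
```

The eight rows printed match the A+BC table above, and swapping in a different lambda gives the table for any other function of the same variables.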
J = f(A,B,C) = AB'C' + A'B'C'
A B C A' B' C' AB'C' A'B'C' J
0 0 0 1 1 1 0 1 1
0 0 1 1 1 0 0 0 0
0 1 0 1 0 1 0 0 0
0 1 1 1 0 0 0 0 0
1 0 0 0 1 1 1 0 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 0 0 0
4.0 Summary:
In this unit, you have learnt the operators of Boolean algebra, which are the AND operator
represented with dot (.), the OR operator represented with plus (+) and the NOT operator which
is an inverter. The unit also shows how Boolean operators and variables can be represented using
truth tables. The truth table can be applied to check the validity of a statement, express
arguments, as well as reduce complex Boolean expressions.
5.0 Self-Assessment:
Solve the following Boolean functions
a. J= f(A,B,C) = B + B + BC + A
b. Z = f(A,B,C) = B + B + BC + A
1.0 Introduction
De Morgan's Theorem is used to simplify different Boolean algebra expressions. It states that
the complement of the product of two or more variables is equal to the sum of the complements
of the variables. With De Morgan's theorem, Boolean expressions can be simplified.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
i. explain the axiomatic relationships using truth tables;
ii. state the order of precedence of a boolean expression; and
iii. list the fundamental importance of boolean algebra
A5 (Inverse): (a) a + a̅ = 1   (b) a.a̅ = 0
To check that the axioms conform to the definitions, properties A1 are obvious, while the following
truth tables verify A2 (identity):

a a+0      a a.1
0 0+0=0    0 0.1=0
1 1+0=1    1 1.1=1

and A5 (inverse):

a a̅ a+a̅ a.a̅
0 1 1 0
1 0 1 0
An important feature of Boolean algebra is duality. The set of (b) axioms are said to be duals of
the (a) axioms, and vice versa, in that a (b) axiom can be formed from its (a) counterpart by
exchanging operators and identity elements ‘+’ to ‘.’ and ‘1’ to ‘0’. Thus for every theorem
derived from one particular set of axioms, one can construct a dual theorem based on the
corresponding set of dual axioms.
Any more complex functionality can be constructed from the three basic Boolean operators
(And, Or, and Not) by using DeMorgan’s Law:
I. The complement of a product is equal to the sum of complements
II. The complement of the sum is equal to the product of the complement
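Because each variable takes only the values 0 and 1, both laws can be checked exhaustively. A short verification sketch (illustrative, not part of the course text):

```python
from itertools import product

# Exhaustively check both De Morgan laws over all 0/1 inputs:
#   (X.Y)' = X' + Y'   (complement of a product = sum of complements)
#   (X+Y)' = X'.Y'     (complement of a sum = product of complements)
for x, y in product([0, 1], repeat=2):
    assert 1 - (x & y) == (1 - x) | (1 - y)
    assert 1 - (x | y) == (1 - x) & (1 - y)
print("De Morgan's laws hold for all inputs")
```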
Precedence
Order of precedence also exists in Boolean algebra, as it does in other areas of mathematics. This
order should be followed in Boolean computations. The Boolean operators defined above have the
order of precedence defined here:
NOT operations have the highest precedence, followed by AND operations, followed by OR
operations.
The brackets should be evaluated first to reduce the complexity of the Boolean operation.
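Python's own `not`, `and`, `or` happen to follow the same NOT > AND > OR precedence, which makes it easy to see how an unbracketed expression groups. An illustrative sketch:

```python
# Python's `not`, `and`, `or` follow the same NOT > AND > OR
# precedence described above, so an unbracketed expression
# groups the same way as the Boolean-algebra rule.
A, B, C = True, False, True

unbracketed = not A and B or C       # parsed as ((not A) and B) or C
explicit = ((not A) and B) or C
assert unbracketed == explicit

# Brackets are evaluated first and can change the grouping:
A, B = False, False
assert (not (A and B)) != ((not A) and B)
print("precedence check passed")
```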
Boolean operations are foundational tools used in building computers and electronic devices.
As an example, consider the statement: "I will take an umbrella if:
a. It is raining; or
b. The weather forecast is bad."
Let “It is raining” be variable X , “The weather forecast is bad” be Y and the result (taking an
umbrella) be Z.
We can generate truth values in a truth table from this problem statement.
From the statement, if either of the conditions is true, an umbrella would be taken.
In functional terms we can be consider the truth value of the umbrella proposition as the output
or result of the truth values of the other two.
“I will sweep the class only if the windows are opened and the class is empty”.
From this statement, we can get two propositions which are “Windows opened” and “Class
empty”. These two propositions are the variables X and Y respectively.
“Windows opened” – X
“Class empty” – Y
1. Boolean logic forms the basis for computation in modern binary computer systems.
2. They are used in the development of software, in selective control structures (if and
if...else statements).
3. They are used in building electronic circuits. For any Boolean function you can
design an electronic circuit and vice versa.
4. A computer’s CPU is built up from various combinatorial circuits. A combinatorial
circuit is a system containing basic Boolean operations (AND, OR, NOT), some inputs,
and a set of outputs.
Z = f(A,B) = A̅B + AB̅

A B A̅ B̅ A̅B AB̅ Z
0 0 1 1 0 0 0
0 1 1 0 1 0 1
1 0 0 1 0 1 1
1 1 0 0 0 0 0
Very complex Boolean functions may result, and these can be simplified in two ways: by applying
the basic rules of Boolean algebra, or by using a Karnaugh map.
The basic rules of Boolean algebra are logical in nature. These rules are followed in simplifying
any Boolean function. As stated in the axioms above, the rules are:
1. A+0=A
2. A+1=1
3. A.0=0
4. A.1=A
5. A+A=A
6. A+A̅=1
7. A.A=A
8. A.A̅=0
9. (A̅)' = A (double complement)
10. A+AB= A(1+B)=A(1)=A
11. A (B + C) = A B + A C
12. A + (B C) = (A + B) (A + C)
De Morgan's theorem
For example, simplify (A + B)(A + C):
(A + B)(A + C)
= AA + AC + AB + BC (law 11, distributive)
= A + AC + AB + BC (law 7, A.A = A)
= A(1 + C) + AB + BC (factor out A)
= A.1 + AB + BC (1 + C = 1)
= A + AB + BC (A.1 = A)
= A(1 + B) + BC (1 + B = 1)
= A.1 + BC
Q = A + BC
This implies that the expression (A + B)(A + C) can be simplified to A + BC.
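Since the variables only take the values 0 and 1, a simplification like this can be confirmed by brute force over all 2^3 assignments. An illustrative check:

```python
from itertools import product

# Brute-force check that the simplification is valid:
# (A + B)(A + C) reduces to A + BC for every 0/1 assignment.
for a, b, c in product([0, 1], repeat=3):
    lhs = (a | b) & (a | c)
    rhs = a | (b & c)
    assert lhs == rhs
print("(A + B)(A + C) == A + BC for all inputs")
```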
4.0 Summary
De-Morgan’s Theorem states that the complement of the product of two or more variables is
equal to the sum of the complements of the variables. This unit covers, axiomatic relationship,
de-morgan’s theorem, truth table and its precedence.
5.0 Self-Assessment
1. List and explain the axiomatic relationships using truth tables
2. State the order of precedence available in any boolean expression
3. List 3 fundamental importance of boolean algebra
6.0 Tutor Marked Assessment
I. Using Boolean algebra techniques, simplify this expression:
1.0 Introduction
The Karnaugh map popularly called k-map is a diagram that has a rectangular array of squares
each representing a different combination of the variables of a Boolean function. It is a way of
reducing Boolean expression complexity. This unit covers the Karnaugh map, with descriptions of
grouping, labelling, and Boolean function reduction.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
i. define k-map;
ii. list the k-map grouping rules; and
iii. reduce Boolean expressions using k-maps.
At this point you have the capability to apply the theorems and laws of Boolean algebra to
simplify logic expressions to produce simpler Boolean functions. Simplifying a logic expression
using Boolean algebra, though not terribly complicated, is not always the most straightforward
process. There isn’t always a clear starting point for applying the various theorems and laws, nor
is there a definitive end to the process. The Karnaugh map (K-map for short) is also known as the
Veitch diagram. It is a tool to facilitate the simplification of Boolean algebra integrated circuit
expressions. The Karnaugh map reduces the need for extensive calculations by taking advantage
of human pattern-recognition.
The Karnaugh map was originally invented in 1952 by Edward W. Veitch. It was further
developed in 1953 by Maurice Karnaugh, a physicist at Bell Labs, to help simplify digital
electronic circuits. In a Karnaugh map the Boolean variables are transferred (generally from a
truth table) and ordered according to the principles of Gray code in which only one variable
changes in between squares. Once the table is generated and the output possibilities are
transcribed, the data is arranged into the largest even group possible and the minterm is
generated. The k-map is a more straightforward process of reduction. In the reduction process
using a k-map, 0 represents the complement of the variable (e.g. B̅) and 1 represents the variable
itself (e.g. B).
1. Consider boxes with ones only. Boxes containing zeros would not be considered.
2. Group 1s in powers of 2. That is 2, 4, 8... ones.
3. Grouping can only be done side to side or top to bottom, not diagonally.
4. Using the same one in more than one group is permissible.
5. The target is to find the fewest number of groups.
6. The top row may wrap around to the bottom row to form a group.
0 1 1 0
1 0 0 1
1 0 0 1
0 1 1 0
Labelling a K-map
In labelling the Karnaugh map, we make use of the principle of the “gray code”.
Labelling a 2-input k-map
A\B 0 1
0 A̅B̅ (00) A̅B (01)
1 AB̅ (10) AB (11)
For the 2-input k-map, the values change from 0 to 1 along both axes.
AB\C 0 1
00 A̅B̅C̅ (000) A̅B̅C (001)
01 A̅BC̅ (010) A̅BC (011)
11 ABC̅ (110) ABC (111)
10 AB̅C̅ (100) AB̅C (101)
In the case of the 3-input k-map, we have A and B on one side of the map and C on the other side
of the map. Using Gray code, we start with A̅B̅ (00); keeping A constant and changing B, we
have A̅B (01). Now, if we still keep A constant and change B, we will have (00) again, which
already exists in the map, so the next thing to do is to keep B constant and then change A. With
this, we will have AB (11) next and then AB̅ (10).
For minimization using the k-map, the value 0 in the truth table corresponding to a variable is
taken as its complement. For instance, if the variable A has the value 0 in the truth table, it is
taken as A̅ to fill in the k-map.
It is important to note that in the Gray-code labelling of a k-map, only one bit changes between
adjacent cells. For example, between neighbouring 4-bit codes such as 0000 and 0001, only 1 bit
changes at a time.
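The Gray-code ordering used for k-map labels can be generated with the standard binary-to-Gray conversion. A small sketch (the helper name `gray_code` is our own, for illustration):

```python
# Generate the n-bit Gray code sequence using the standard
# binary-to-Gray conversion g = b ^ (b >> 1), then confirm that
# consecutive codes differ in exactly one bit.
def gray_code(n):
    return [i ^ (i >> 1) for i in range(2 ** n)]

codes = gray_code(2)
print([format(g, "02b") for g in codes])   # the k-map column order 00, 01, 11, 10

for a, b in zip(codes, codes[1:]):
    assert bin(a ^ b).count("1") == 1      # exactly one bit changes
```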
Consider the k-map examples below.
Example 1
Z = f(A,B) = AB + AB̅

A B A̅ B̅ AB AB̅ Z
0 0 1 1 0 0 0
0 1 1 0 0 0 0
1 0 0 1 0 1 1
1 1 0 0 1 0 1

Filling the k-map from the truth table and grouping the two adjacent 1s in the A row:

   B̅ B
A̅ 0 0
A  1 1
In k-map, the variable that remains constant across the group is retained. Since the variable B
varies in value (looking at column B̅ and B, the variable changed) and A remains constant, the
constant value across the group is A. A̅ is not used even though it is constant because the value is
0.
Z = AB + AB̅
Z = A
Example 2
J = f(A,B,C) = AB̅C̅ + A̅B̅C̅

A B C A̅ B̅ C̅ AB̅C̅ A̅B̅C̅ J
0 0 0 1 1 1 0 1 1
0 0 1 1 1 0 0 0 0
0 1 0 1 0 1 0 0 0
0 1 1 1 0 0 0 0 0
1 0 0 0 1 1 1 0 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 0 0 0
   B̅C̅ B̅C BC BC̅
A̅ 1   0   0   0
A  1   0   0   0
From the diagram, the value of A changes across the group and the value of B̅C̅ remains the
same.
J = AB̅C̅+A̅B̅C̅ = B̅C̅
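Both k-map reductions (Example 1 and Example 2) can be double-checked by brute force, since the original and reduced functions must agree on every input. An illustrative sketch:

```python
from itertools import product

# Confirm the two reductions obtained from the k-maps:
#   AB + AB'       == A      (Example 1)
#   AB'C' + A'B'C' == B'C'   (Example 2)
NOT = lambda x: 1 - x

for a, b in product([0, 1], repeat=2):
    assert (a & b) | (a & NOT(b)) == a

for a, b, c in product([0, 1], repeat=3):
    lhs = (a & NOT(b) & NOT(c)) | (NOT(a) & NOT(b) & NOT(c))
    assert lhs == (NOT(b) & NOT(c))
print("both k-map reductions verified")
```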
An advantage of the k-map over the Boolean algebra method of function reduction is that the k-
map has a definite process which is followed unlike the boolean algebra method which may not
have a particular starting and ending point.
Example 3
Consider the 4-variable k-map below (rows labelled AB and columns labelled CD, both in Gray-code order):

AB\CD C̅D̅ C̅D CD CD̅
A̅B̅  0    0   0  1
A̅B   1    1   0  1
AB    1    1   0  1
AB̅   0    0   0  1

Group 1 is the square of four 1s in the A̅B and AB rows under the C̅D̅ and C̅D columns; group 2 is the column of four 1s under CD̅.
The final answer here after the grouping is derived by looking across the group and eliminating
the variable that changes in value.
For group 1, looking horizontally, D changes in value while C has a constant value of C̅. So, D is
eliminated and C̅ retained. Looking vertically, A changes across the group while B remains
constant. So, A is eliminated and B retained.
For group 1, the answer is the AND of the retained variables after elimination, and this is BC̅.
We do the same for group 2. Our answer there is CD̅ since, vertically across the group, both
A and B change values.
After doing this for all the groups in the k-map, we then OR the individual results of each group:
F = BC̅ + CD̅.
4.0 Summary
This unit covered Boolean expression reduction using the Karnaugh map, a diagram with a
rectangular array of squares, each representing a different combination of the variables of a
Boolean function.
5.0 Self-Assessment
a) Reduce J= f(A,B,C) = A̅B + BC̅ + BC + AB̅C̅ using a k-map
b) Simplify the following Boolean functions:
ABC + ABC + ABC + ABC + ABC
A + B+
c) Consider a Boolean function represented by the truth table below and simplify the
expression using k-map
A B C F
0 0 0 1
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 0
1 0 1 1
1 1 0 0
1 1 1 0
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Basic Logic Gates
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading
1.0 Introduction
Logic gates are the building blocks of digital circuits. Electronic circuits are built using the various
types of gates. The basic gates are the AND, OR, and NOT gates. Other gates, such as NAND, NOR,
XOR, and XNOR, are derived using combinations of the basic gates.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
I. List at least five (5) types of gates
II. Mention the logic function associated with each gate
III. Draw the truth table associated with each gate
3.0 Main Content
3.1 Basic Logic Gates
Logic gates can be viewed as black boxes with binary inputs (independent variables) and binary
outputs (dependent variables). Logic also refers to both the study of modes of reasoning and the
use of valid reasoning; in the latter sense, logic is used in most intellectual activities. Logic in
computer science has emerged as a discipline and has been extensively applied in the fields of
Artificial Intelligence and Computer Science, and these fields provide a rich source of problems
in formal and informal logic.
Boolean logic is fundamental to computer hardware, particularly the system's arithmetic and
logic structures, and relates to the operators AND, NOT, and OR.
Logic gates
A logic gate is an elementary building block of a digital circuit. Complex electronic circuits are
built using the basic logic gates. At any given moment, every terminal of the logic gate is in one
of the two binary conditions low (0) or high (1), represented by different voltage levels.
Other gates- NAND, NOR, XOR and XNOR are based on the 3 basic gates.
The AND gate is so called because, if 0 is called "false" and 1 is called "true," the gate acts in the
same way as the logical "and" operator. The following illustration and table show the circuit
symbol and logic combinations for an AND gate.
The output is "true" when both inputs are "true." Otherwise, the output is "false."
The OR gate
The OR gate gets its name from the fact that it behaves like the logical "or." The
output is "true" if either or both of the inputs are "true." If both inputs are "false," then the output
is "false."
A logical inverter, sometimes called a NOT gate to differentiate it from other types of electronic
inverter devices, has only one input. It reverses the logic state of its input.
As previously considered, the AND, OR and NOT gates’ actions correspond with the AND, OR
and NOT operators.
More complex functions can be constructed from the three basic gates by using DeMorgan’s
Law.
The NAND gate operates as an AND gate followed by a NOT gate. It acts in the manner of the
logical operation "and" followed by negation. The output is "false" if both inputs are "true."
Otherwise, the output is "true". It finds the AND of two values and then finds the opposite of the
resulting value.
The NOR gate is a combination of an OR gate followed by an inverter. Its output is "true" if both
inputs are "false." Otherwise, the output is "false". It finds the OR of two values and then finds
the complement of the resulting value.
The XOR gate
The XOR (exclusive-OR) gate outputs "true" when exactly one of its inputs is "true."
Z = A̅B + AB̅
XOR gate
A B Z
0 0 0
0 1 1
1 0 1
1 1 0
Z = AB + A̅B̅
XNOR gate
A B Z
0 0 1
0 1 0
1 0 0
1 1 1
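The gates in this unit can be modelled as small functions, with the derived gates built from the basic three, as the text describes. An illustrative Python sketch (the function names are our own):

```python
# The gates described above as small functions on 0/1 values;
# each derived gate is built from the basic AND, OR and NOT.
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return OR(AND(a, NOT(b)), AND(NOT(a), b))
def XNOR(a, b): return NOT(XOR(a, b))

# Print the XOR truth table shown above.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
```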
4.0 Summary
This unit covers logic reasoning, logic gates as well as the logic function and truth tables for
each of the gates. The basic gates are AND , OR and NOT gates, others are NOR, NAND,
EXOR, and EXNOR.
5.0 Self-Assessment
a. List at least five (5) types of gates
b. Mention the logic function associated with each gate
c. Draw the truth table associated with each gate
6.0 Tutor Marked Assessment
a. Draw the physical representation of the AND, OR, NOT and XNOR logic gates.
I. Z= ABC,
Further Reading
https://whatis.techtarget.com/definition/logic-gate-AND-OR-XOR-NOT-NAND-NOR-and-
XNOR
https://www.electronics-tutorials.ws/logic/logic_1.html
http://www.ee.surrey.ac.uk/Projects/CAL/digital-logic/gatesfunc/
7.0 References
Gupta, A., & Arora, S. (2009). Industrial Automation and Robotics. Laxmi Publications.
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Combinatorial Logic Circuit
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading
1.0 Introduction
Different gates can be combined to build digital circuits. As learnt in the previous
module, algebraic functions can be reduced using a k-map or Boolean reduction. The reduced
logic translates to a reduction in the cost of building a circuit.
2.0 Learning Outcomes
At the end of this unit, you should be able to
i. Combine different gates to form a logic circuit.
ii. Draw the associated truth table for the logic circuit
3.0 Main Content
3.1 Combinatorial Logic Circuits
With the combinations of several logic gates, complex operations can be performed by electronic
devices. Arrays (arrangement) of logic gates are found in digital integrated circuits (ICs).
As IC technology advances, the required physical volume for each individual logic gate
decreases, and digital devices of the same or smaller size become capable of performing much
more complicated operations at increased speed.
Combination of gates
A B C A̅ A̅BC
0 0 0 1 0
0 0 1 1 0
0 1 0 1 0
0 1 1 1 1
1 0 0 0 0
1 0 1 0 0
1 1 0 0 0
1 1 1 0 0
A goes into the NOT gate and is inverted; after this, it goes into the AND gate along with the
variables B and C. The final output at the output terminal of the AND gate is A̅BC. More
complex circuitry can be developed using the symbolic representation in this same manner.
Q = A̅B̅ + BC
A B C D E Q
0 0 0 1 0 1
0 0 1 1 0 1
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 0 0 0
1 1 1 0 1 1
Basically, there are 3 variables, A, B, and C; do not be confused by the presence of D and E,
which are intermediate outputs. Variables A and B go into a NOR gate, and B goes into an AND
gate along with variable C. The B is reused from the earlier defined one so as not to waste
resources or have repetition. The outputs of the NOR and AND gates serve as inputs to the OR gate.
Q = A̅B̅ + BC
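Assuming the wiring described above (A and B into a NOR gate, B and C into an AND gate, with both outputs feeding a final OR gate), the circuit can be simulated as a sketch:

```python
# Evaluate the circuit described above: A and B feed a NOR gate (D),
# B and C feed an AND gate (E), and D, E feed a final OR gate (Q).
def circuit(a, b, c):
    d = 1 - (a | b)   # NOR(A, B)
    e = b & c         # AND(B, C)
    return d | e      # OR(D, E) -> Q

# Print the full truth table of the circuit.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, circuit(a, b, c))
```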
Q= (ABC)(DE)
4.0 Summary
In this chapter, you have learnt how to combine different gates together
5.0 Self-Assessment
i. Combine gates together to draw 4 logic circuits, combining at least 3 gates together in
each.
ii. Draw the logic gate and associated logic circuits for the following functions
a. X = A̅BC̅D + FG
b. Z = ABC + CDE + ACF
6.0 Tutor Marked Assessment
Write out the logic function of the gates below:
i)
ii)
https://whatis.techtarget.com/definition/logic-gate-AND-OR-XOR-NOT-NAND-NOR-and-
XNOR
https://www.electronics-tutorials.ws/logic/logic_1.html
http://www.ee.surrey.ac.uk/Projects/CAL/digital-logic/gatesfunc/
8.0 References
Gupta, A., & Arora, S. (2009). Industrial Automation and Robotics. Laxmi Publications.
1.0 Introduction
A program in computing can be regarded as set of instructions. These instructions are used in
executing any given task. The programming languages are known to be the medium through
which human beings communicate with the computer; different languages have evolved over the
years, and each has its own target and features. The features of a language are used to measure
its strength and how well it will be accepted by the public.
Programming Language is a set of specialized notations for communicating with the computer
system.
Hundreds of programming languages have been developed in the last fifty years. Many of them
remained in the labs and the ones, which have good and more general features, got recognized.
Every language that is introduced comes with features upon which its success is judged. In the
initial years, languages were developed for specific purposes, which limited their scope.
However, as the computer revolution spread, affecting the common man, languages needed to be
molded to suit all kinds of applications. Every new language inherited certain features from
existing languages and added its own features. The chronology of developments in programming
languages is given below:-
I. The first computer program was written by Lady Ada Augusta Lovelace in 1843 for an
application of the Analytical Engine.
II. Konrad Zuse, a German, started a language design project in 1943. He finally developed
Plankalkül (program calculus) in 1945. The language supported bit, integer, and floating-point
scalar data, arrays, and record data structures.
III. In the early 1950s, Grace Hopper and her team developed the A-0 language. During this
period, assembly language was introduced.
IV. The major milestone was achieved when John Backus developed FORTRAN (Formula
Translation) in 1957. FORTRAN is oriented towards numerical calculation. It was a major step
towards the development of a full-fledged programming language, including control structures,
conditional loops, and input and output statements.
V. ALGOL was developed by GAMM (German Society of Applied mathematics) and ACM
(Association of Computing Machinery) in 1960
VI. COBOL (Common Business Oriented Language) was developed for business purposes in
1960 under the sponsorship of the US Department of Defense.
VII. BASIC (Beginner's All-purpose Symbolic Instruction Code) was developed by John Kemeny
and Thomas Kurtz in the 1960s.
VIII. Pascal was developed by Niklaus Wirth around 1970. Pascal was named after the French
philosopher and mathematician Blaise Pascal.
IX. In the early 1970s, Dennis Ritchie developed C at Bell Laboratories, using some of the B
language's features.
X. C++ was developed by Bjarne Stroustrup in early 1980s extending features of C and
introducing object –oriented features
XI. Java, originally called Oak, was developed by Sun Microsystems of the USA in 1991 as a
general-purpose language. Java was designed for the development of software for consumer
electronic devices. It was a simple, reliable, portable, and powerful language.
A language may be extremely useful for one type of application. For example, a language such
as COBOL is useful for business applications but not for embedded software. On the basis of
application, programming languages can be broadly classified as:
Business: COBOL
Scientific: FORTRAN
Internet: Java
System: C, C++
Artificial intelligence (AI): LISP and PROLOG
The features of one programming language may differ from the other. One can be easier and
simple while another can be difficult and complex. The program written for a specific task may
have few lines in one language but many in another. The success and strength of a
programming language are judged with respect to standard features. To begin the language
selection process, it is important to establish some criteria that makes a language good. A good
language choice should provide a path into the future in a number of important ways.
(a) Ease of use: this is the most important factor in choosing a language. It should be easy to
write and execute programs in the language. The ease and clarity of a language depend upon its
syntax. It should be capable enough to provide a clear, simple, and unified set of
concepts. The vocabulary of the language should resemble English (or some other natural
language). Any concept that cannot easily be explained to amateurs should not be included in the
language. Part-time programmers do not want to struggle with difficult concepts; they just want
to get a job done quickly and easily.
(b) Portability:- the language should support the construction of code in a way that it could be
distributed across multiple platforms (operating systems). Computer languages should be
independent of any particular hardware or operating systems, that is, programs written on one
platform should be able to be tested or transferred to any other computer or platform and there it
should perform accurately.
(c) Reliability:- the language should support construction of components that can be expected to
perform their intended functions in a satisfactory manner throughout its lifetime. Reliability is
concerned with making a system failure free, and thus is concerned with all possible errors. The
language should have the support of error detection as well as prevention. It should make some
kinds of errors impossible for example, some errors can be prevented by a strict syntax checking.
Apart from prevention, the language should also be able to detect and report errors in the
program. For example errors such as arithmetic overflow and assertions should be detected
properly and reported to the programmers immediately so that the error can be rectified. The
language should provide reliability by supporting explicit mechanism for dealing with problems
that are detected when the system is in operation.
(d) Safety:- safety is concerned with the extent to which the language supports the construction
of safety critical systems, yielding systems that are fault tolerant, fail-safe or robust in the face of
systemic failures. The system must always do what is expected and be able to recover from any
situation that might lead to a mishap or actual system hazard. Thus, safety tries to ensure that
any failures that occur result in minor consequences, and even potentially dangerous failures are
handled in a fail-safe fashion. A language can facilitate this through such features as built-in
consistency checking and exception handling.
(f) Cost: Cost component is a primary concern before deploying a language at a commercial
level. It includes several costs such as; program execution and translation cost, program
creation, testing and use, program maintenance
(g) Compact Code: A good language should also promote compact coding; that is, the intended
operations should be coded in a minimum number of lines. Even if a language is powerful, if it
is not able to perform the task in a small amount of code, it is bound to be unpopular.
This is the main reason of C language’s popularity over other languages in developing complex
applications. Larger codes require more testing and developing time, thereby increasing the cost
of developing an application.
(h) Maintainability: creating an application is not the end of the system development. It
should be maintained regularly so that it can be modified to satisfy new requirement or to correct
deficiencies. Maintainability is actually facilitated by most of the languages, which makes it
easier to understand and then change the software. Maintainability is closely linked with the
structure of the code. If the original code were written in an organized way (Structural
Programming) then it would be easy to modify or add new changes.
(i) Provides Interface To Other Language:- From the perspective of the language, interface to
other language refers to the extent to which the selected language supports interfacing feature to
other languages. This type of support can have a significant impact on the reliability of the data,
which is exchanged between applications, developed with different languages. In case of data
exchange between units of different languages, without specific language support, no checking
may be done on the data or even on their existence. Hence, the potential for unreliability
becomes high. Modern-day languages have come a long way, and most of them provide
interface support for other languages.
(j) Concurrency Support: Concurrency support refers to the extent to which inherent language
supports the construction of code with multiple threads of control (also known as parallel
processing). For some applications, multiple threads of control are very useful or even
necessary. This is particularly true for real time systems and those running on architecture with
multiple processors. It can also provide the programmer with more control over its
implementation. Other features include Reusability and Standardization.
4.0 Summary
In this unit, you have learnt to:
i. list five (5) different programming languages and their authors;
ii. state five (5) features of each programming language; and
iii. state the features of a good programming language.
5.0 Self-Assessment
a) What is a program?
b) Discuss the evolution of programming language from Ada Lovelace to Java.
6.0 Tutor Marked Assessment
a) An action is to occur at a particular time; is this a program? True/False. Justify your
answer.
b) If COBOL is good for business, identify the applications for Java, Pascal, C++, and BASIC.
c) List and Explain five (5) features of a good programming language
7.0 Further Reading
https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading13.htm
https://en.wikibooks.org/wiki/Introduction_to_Computer_Information_Systems/
Program_Development
http://interactivepython.org/runestone/static/CS152f17/GeneralIntro/Glossary.html
https://pages.uoregon.edu/moursund/Books/PS-Expertise/chapter-9.htm
Unit 2 Classification and Generations of Programming Languages
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Classification of Programming Languages
3.2 Generations of Programming Language
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading
1.0 Introduction
The computer's own language is machine language, that is, 0's and 1's. Communication with the
computer is via machine language. This language is cumbersome and not easy to remember,
which led to the development of assembly language and high-level languages that are more
English-like in nature. The classification and generations of programming languages are based on
machine language, assembly language, and high-level language.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
I. Mention programming languages according to their generations
II. Differentiate between the generations of languages
3.0 Main Content
3.1 Classification of Programming Languages
Computers understand only one language and that is binary language (the language of 0’s and
1’s) also known as machine language. In the initial years of computer programming, all the
instructions were given in binary form only. Although these programs were easily understood by
the computer, it proved too difficult for a human being to remember all the instructions in the
form of 0's and 1's. Therefore, the computer remained a mystery to the common man until other
languages, such as assembly and high-level languages, were developed, which were easier to
learn and understand. These languages use commands that have some degree of similarity with
English (such as if, else, exit).
Programming languages can be grouped into three major categories: machine language,
assembly (low-level) language and high–level languages.
1. Machine language: Machine language is the native language of computers. It uses only 0’s
and 1’s to represent data and the instructions written in this language, consists of series of 0’s
and 1’s. Machine language is the only language understood by the computer. The machine
language is peculiar to each type of computer.
2. Assembly (low-level) language: assembly language uses mnemonic codes in place of binary
instructions, making programs easier for humans to write and remember; an assembler translates
them into machine language (discussed in detail below).
3. High-level language: these languages are written using a set of words and symbols following
some rules, similar to a natural language such as English. The programs written in high-level
languages are known as source programs, and these programs are converted into machine-
readable form by using compilers or interpreters.
Since early 1950s, programming languages have evolved tremendously. This evolution has
resulted in the development of hundreds of different languages. With each passing year, the
languages have become more user-friendly and more powerful than their predecessors. We can
illustrate the development of all the languages in five generations.
The first language was binary, also known as machine language, which was used in the earliest
computers and machines. We know that computers are digital devices, which have only two
states, ON and OFF (1 and 0). Hence, computers can understand only two binary codes, 1 and 0.
Therefore, every instruction and data should be written using 0’s and 1’s. Machine language is
also known as the computer’s ‘native’ language because this system of codes is directly
understood by the computer.
Advantages of machine language: Even though machine language is not a human-friendly language, it offers certain advantages, as listed below:
i. Translation free: Machine language is the only language that the computer can execute directly, without the need for conversion. In fact, it is the only language the computer is able to understand. Even an application written in a high-level language has to be converted into machine-readable form so that the computer can understand the instructions.
ii. High speed: Since no conversion is needed, applications developed using machine language are extremely fast. It is usually used for complex applications such as space control systems, nuclear reactors, and chemical processing.
Disadvantages of machine language: In spite of these advantages, machine language has the following drawbacks:
i. Complex language: Machine language is very difficult to read and write. Since all the data and instructions must be converted to binary code, it is almost impossible to remember the instructions. A programmer must specify each operation, along with the specific location where each piece of data and each instruction is to be stored. This means that a programmer partially needs to be a hardware expert to have proper control over machine language.
ii. Error prone: Since the programmer has to remember all the opcodes (operation codes) and memory locations, machine language is bound to be error prone. It takes a superhuman effort to keep track of the logic of the problem, and frequent programming errors therefore result.
iii. Tedious: Machine language poses real problems while modifying and correcting a program. Sometimes the program becomes too complex to modify and the programmer has to re-program the entire logic. Therefore, it is very tedious and time consuming, and since time is a precious commodity, programming using machine language tends to be costly.
SECOND GENERATION: ASSEMBLY LANGUAGE
The complexity of machine language led to the search for another language, and assembly language was developed in the early 1950s, with IBM as its main developer. Assembly language allows programmers to interact directly with the hardware. It assigns a mnemonic code to each machine language instruction to make it easier to remember and write, allowing a more human-readable way of writing programs than binary bit patterns.
Unlike other programming languages, assembly language is not a single language but a group of languages. Each processor family (and sometimes individual processors within a family) has its own assembly language.
An assembly language provides mnemonic instructions, usually three letters long, corresponding to each machine instruction. The letters usually abbreviate what the instruction does: for example, ADD is used to perform an addition operation, MUL for multiplication, and so on. Assembly languages make it easier for humans to remember how to write instructions to the computer, but an assembly language is still a representation of the computer's native instruction set. Since each type of computer uses a different native instruction set, assembly languages cannot be standardized from one machine to another, and instructions from one computer cannot be expected to work on another.
Assembler:
Assembly language is nothing more than a symbolic representation of machine code, which also allows symbolic designation of memory locations. However, no matter how close assembly language is to machine code, the computer still cannot understand it. Assembly language programs must be translated into machine code by a separate program called an assembler. The assembler recognizes the character strings that make up the symbolic names of the various machine operations and substitutes the required machine code for each instruction. At the same time, it calculates the required address in memory for each symbolic name of a memory location and substitutes those addresses for the names, resulting in a machine language program that can run on its own at any time. An assembler converts the assembly codes into binary codes and then assembles the machine-understandable code into the main memory of the computer, making it ready for execution.
The original assembly language program is also known as the source code, while the final machine language program is designated the object code. If an assembly language program needs to be changed or corrected, it is necessary to make the changes to the source code and then re-assemble it to create a new object program. The functions of an assembler are given below:
a. It allows the programmer to use mnemonics while writing source code programs, which are easier to read and follow.
b. It allows variables to be represented by symbolic names rather than memory locations.
c. It translates mnemonic operation codes into machine code and corresponding register addresses into system addresses.
d. It checks the syntax of the assembly program and generates diagnostic messages on
syntax errors.
e. It assembles all the instructions in the main memory for execution.
f. In case of large assembly programs, it also provides linking facilities among the subroutines.
g. It facilitates the generation of output on the required output medium.
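The mnemonic and symbolic-name substitution described above can be sketched as a toy two-pass assembler in Python. The opcodes and instruction format below are invented for illustration; a real assembler targets the instruction set of one specific processor.

```python
# Toy two-pass assembler sketch. Pass 1 records the address of each
# label; pass 2 substitutes numeric opcodes and resolved addresses.
# The opcode values are invented for this example.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "JMP": 0x04}

def assemble(lines):
    symbols, code = {}, []
    # Pass 1: assign an address to every symbolic label ("name:").
    addr = 0
    for line in lines:
        if line.endswith(":"):
            symbols[line[:-1]] = addr
        else:
            addr += 1
    # Pass 2: translate each mnemonic and operand into numbers.
    for line in lines:
        if line.endswith(":"):
            continue                      # labels emit no code
        op, _, operand = line.partition(" ")
        value = symbols.get(operand)      # symbolic address, if any
        if value is None:
            value = int(operand) if operand else 0
        code.append((OPCODES[op], value))
    return code

program = ["start:", "LOAD 10", "ADD 20", "STORE 30", "JMP start"]
print(assemble(program))  # prints [(1, 10), (2, 20), (3, 30), (4, 0)]
```

Note how "JMP start" is resolved to the numeric address 0 recorded for the label, just as the text describes the assembler substituting addresses for symbolic names.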
Advantages of assembly language:
ii. Less error prone: Since mnemonic codes and symbolic addresses are used, the programmer does not have to keep track of the storage locations of the information and instructions. Hence, fewer errors are likely while writing an assembly language program. Even in case of errors, assembly programs provide better facilities to locate and correct them than machine language programs do.
iii. Efficiency: Assembly programs can run much faster and use less memory and other resources than a similar program written in a high-level language. Speed increases of 2 to 20 times are common, and occasionally an increase of hundreds of times is possible. Apart from speed, assembly programs are also memory efficient, that is, the memory requirement of a program (size of code) is usually smaller than that of a similar program written in a high-level language.
iv. More Control on Hardware: Assembly language also gives direct access to key machine
features essential for implementing certain kinds of low-level routines such as an operating
system kernel or micro-kernel, device drivers, and machine control.
Disadvantages of assembly language:
i. Machine dependent: Different computer architectures have their own machine and assembly languages, which means that programs written in these languages are not portable to other (incompatible) systems. If an assembly program is to be shifted to a different type of computer, it has to be modified to suit the new environment.
ii. Harder to learn: The source code of an assembly language is cryptic and in a very low-level, machine-specific form. Being machine dependent, every type of computer architecture requires a different assembly language, making it nearly impossible for a programmer to remember and understand every dialect of assembly. Only skilled and highly trained programmers, who know the logical structure of the computer, can create applications using assembly language.
iii. Slow development time: Even with highly skilled programmers, assembly applications are slower to develop than applications based on high-level languages. Since several lines of assembly code are required for each line of high-level code, the development time can be 10 to 100 times that of a high-level language application.
iv. Less efficient: A program written in assembly language is less efficient than an equivalent machine language program because every assembly instruction has to be converted into machine code. Therefore, the execution of an assembly language program takes more time than its equivalent machine language program. Moreover, before executing an assembly program, the assembler has to be loaded into the computer's memory for translation, where it occupies a sizeable amount of memory.
v. Not Standardized: Assembly language cannot be standardized because each type of computer
has a different instruction set and, therefore, a different assembly language.
THIRD GENERATION: 3GL
During the 1960s, computers started to gain popularity and it became necessary to develop languages that were more like natural languages, such as English, so that a common user could use the computer easily. Since assembly language required deep knowledge of computer architecture, it demanded hardware skills as well as programming skills. Due to the computer's widespread usage, the early 1960s saw the emergence of third-generation languages (3GLs). COBOL, FORTRAN, BASIC, and C are examples of 3GLs and are considered high-level languages.
Using a high-level language, programs are written as a sequence of statements that resembles human thinking about solving a problem. For example, the following BASIC code snippet calculates the sum of two numbers.
LET X = 10
LET Y = 20
LET SUM = X + Y
PRINT SUM
The first two statements store 10 in variable X (a named memory location) and 20 in variable Y, respectively. The third statement creates a variable named SUM, which stores the sum of the values of X and Y. Finally, the output is printed, that is, the value stored in SUM is printed on the screen. From this simple example, it is evident that even a novice user can follow the logic of the program.
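The same logic reads almost identically in another high-level language. Here is the equivalent in Python, as one illustration of how close 3GL code is to ordinary notation:

```python
# The BASIC example above, expressed in Python: two values are
# stored in named variables, added, and the result is printed.
x = 10
y = 20
total = x + y
print(total)  # prints 30
```

In both languages the programmer describes the calculation in familiar terms and never touches opcodes, registers, or memory addresses.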
Since computers understand only machine language, it is necessary to convert high-level programs into machine language code. This is achieved by using language translators or language processors, generally known as compilers, interpreters, or other routines that accept statements in one language and produce equivalent statements in another language.
Once the program has been compiled, the resulting machine code is saved separately and can be run on its own at any time; that is, once the object code is generated, there is no need for the actual source code. However, if the source code is modified, it is necessary to recompile the program for the changes to take effect.
NOTE: For each high-level language, a separate compiler is required. For example, a compiler for the C language cannot translate a program written in FORTRAN. Hence, to execute programs in both languages, the host computer must have compilers for both.
There are fundamental similarities in the functioning of an interpreter and a compiler. However, there are certain dissimilarities also, as given in Table 5.1 below.
Table 5.1: Similarities and dissimilarities between the functions of an interpreter and a compiler
Execution time:
Compiler - Faster, because all statements are translated only once and saved in an object file, which can be executed at any time without translating again.
Interpreter - Slower, because each statement is translated every time it is executed from the source program.
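The compile-once idea can be demonstrated with Python's built-in compile() function, which translates source text into a reusable code object. This is only a rough analogy to an object file, since Python compiles to bytecode rather than native machine code:

```python
# compile() translates the source text once; exec() then runs the
# resulting code object repeatedly without re-parsing the source.
source = "result = sum(range(1, 11))"
code_obj = compile(source, "<string>", "exec")

namespace = {}
for _ in range(3):          # run the already-translated code 3 times
    exec(code_obj, namespace)
print(namespace["result"])  # prints 55
```

A pure interpreter, by contrast, would re-translate the statement on every one of the three runs, which is exactly the slowdown Table 5.1 describes.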
Nowadays, many languages use a hybrid translator having the characteristics of a compiler as well as an interpreter. In such a case, the program is developed and debugged with the help of an interpreter, and when the program becomes bug free, the compiler is used to compile it.
Advantages of high-level languages:
(a) Readability: Since high-level languages are closer to natural languages, they are easier to learn and understand. In addition, a programmer does not need to be aware of the computer architecture; even a layperson can use an HLL without much difficulty. This is the main reason for the popularity of HLLs.
(b) Machine independent: High-level languages are machine independent in the sense that a program created using an HLL can be used on different platforms with very little or no change at all.
(c) Easy debugging: High-level languages include support for abstraction, so that programmers can concentrate on finding the solution to the problem rather than on low-level details of data representation, which results in fewer errors. Moreover, compilers and interpreters are designed to detect and point out errors instantly, so syntax errors are caught before the program runs.
(d) Easier to maintain: Compared to machine and low-level languages, programs written in an HLL are easier to modify because they are easier to understand.
(e) Low development cost: High-level languages permit faster development of programs. Although a high-level program may not be as efficient as an equivalent machine or low-level program, the savings in programmer time generally outweigh the inefficiency of the application.
(f) Easy documentation: Since statements written in an HLL are similar to natural language, they can be easily understood by human beings. As a result, the code is largely self-documenting; there is little or no need for comments to be inserted in programs.
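The easy-debugging point above can be illustrated concretely: a translator rejects an ill-formed statement before any code runs and reports a diagnostic. The sketch below feeds a malformed line to Python's compile() function; the exact message text varies between Python versions:

```python
# A compiler or interpreter detects syntax errors before execution:
# nothing in bad_source ever runs, and a diagnostic is produced.
bad_source = "LET X = = 10"   # malformed statement (doubled '=')
try:
    compile(bad_source, "<string>", "exec")
except SyntaxError as err:
    print("diagnostic:", err.msg)
```

This is the "generates diagnostic messages on syntax errors" behaviour attributed to assemblers and compilers earlier in the unit, experienced from the programmer's side.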
Disadvantages of high-level languages:
i. Poor control on hardware: High-level languages were developed to ease the pressure on programmers so that they do not have to know the intricacies of the hardware. As a result, applications written in high-level languages sometimes cannot completely harness the power available at the hardware level.
ii. Less efficient: HLL programs are less efficient as far as computation time is concerned. This is because, unlike machine language, high-level language code must be sent through another processing program known as a compiler. This process of translation increases execution time: programs written in a high-level language take more time to execute and require more memory space.
Although a number of languages have evolved over the last five decades, only a few were considered worthwhile to market as commercial products. Some of the commonly used high-level languages are discussed below:
(a) FORTRAN: FORTRAN (FORmula TRANslation) was designed primarily for scientific and engineering computations. Its main feature is that it can handle complex numbers very easily. However, its syntax is very rigid. A FORTRAN program is divided into sub-programs; each sub-program is treated as a separate unit, and they are compiled separately. The compiled programs are linked together at load time to make a complete application. FORTRAN is not well suited to handling large volumes of data and, hence, is not often used for business applications.
(b) COBOL: COBOL, or COmmon Business Oriented Language, has evolved through many design revisions. Grace Murray Hopper, working on behalf of the US Department of Defense, was involved in the development of COBOL. She showed for the first time that a system could use English-like syntax, suited to business notation rather than scientific notation. The first version was released in 1960 and was later revised in 1974 and 1985. COBOL was standardized, with revisions, by ANSI in 1968.
COBOL is considered a robust language for the description of input/output formats, and it can cope with large volumes of data. Due to its similarity with English, COBOL programs are easy to read and write. Since it uses English words rather than short abbreviations, the instructions are self-documenting and self-explanatory. However, due to its large vocabulary, programs created using COBOL are difficult to translate. COBOL helped companies perform accounting work more effectively and efficiently.
(c) BASIC: BASIC, or Beginner's All-Purpose Symbolic Instruction Code, was developed by John Kemeny and Thomas Kurtz at Dartmouth College in 1964. It was the first interpreted language made available for general use. It came into such widespread use that most people saw and used this language before they dealt with others. Presently, many advanced versions of BASIC are available and are used in a variety of fields such as business, science, and engineering.
BASIC programs were traditionally interpreted, meaning that each line of code had to be translated as the program was running. BASIC programs, therefore, ran slower than FORTRAN programs. However, if a BASIC program crashed because of a programming error, it was much easier to identify the source of the problem, and in some cases the program could even be restarted at the point where it broke down. In a BASIC program, each statement is prefixed by a line number, which serves a dual purpose: to provide a label for every statement and to identify the sequence in which the statements will be executed. BASIC is easy to learn as it uses common English words, making it a good language for beginners learning their initial programming skills.
(d) PASCAL: Named after Blaise Pascal, a French philosopher, mathematician, and physicist, PASCAL was specifically designed as a teaching language. It was developed by Niklaus Wirth at the Swiss Federal Institute of Technology in Zurich in the early 1970s.
PASCAL is a highly structured language, which forces programmers to design programs very carefully. Its objective was to force students to correctly learn the techniques and requirements of structured programming. PASCAL was designed to be platform independent, that is, a PASCAL program could run correctly on any other computer, even one with a different and incompatible type of processor. The result was relatively slow operation, but it did work in its own fashion.
(e) C: C was developed by Dennis Ritchie at Bell Laboratories in the early 1970s. It consists of a rich collection of standard functions useful for managing system resources. It is flexible, efficient, and easily available. Having a syntax close to English words, it is an easy language to learn and use. Applications generated using C are portable, that is, programs written in C can be executed on multiple platforms. C supports structured data types that allow simple data storage, and it has the concept of pointers, which hold the memory addresses of variables and files.
(f) C++: This language was developed by Bjarne Stroustrup in the early 1980s. It is a superset of C and supports object-oriented features. It is used effectively in developing system software as well as application software. As an extension of C, C++ maintained the efficiency of C and added the power of inheritance. C++ works on classes and objects as the backbone of object-oriented programming. Being a superset of C, it is an extremely powerful and efficient language. However, C++ is much harder to learn and understand than its predecessor C.
The salient features of C++ are:
Strongly typed
Case-sensitive
Platform independent
(g) JAVA: This language was developed by Sun Microsystems of USA in 1991. It was originally called 'Oak'. Java was designed for the development of software for consumer electronic devices. As a result, Java came out to be a simple, reliable, portable, and powerful language. It truly implements all the object-oriented features. Java was developed for the Internet and has contributed a lot to its development. It handles issues such as portability, security, networking, and compatibility with various operating systems. It is immensely popular on the web and is used for creating scientific and business applications.
FOURTH GENERATION: 4GL
Fourth generation languages (4GLs) have simple, English-like syntax rules and are commonly used to access databases. Third generation languages are considered procedural languages because the programmer must list each step and use logical control structures to indicate the order in which instructions are executed. 4GLs, on the other hand, are non-procedural languages. The non-procedural method simply states the needed output instead of specifying each step one after another to perform a task. In other words, the computer is instructed WHAT it must do rather than HOW to perform the task.
The non-procedural method is easier to write but offers less control over how each task is actually performed. When using non-procedural languages, the methods used and the order in which each task is carried out are left to the language itself; the user does not have any control over them. In addition, 4GLs sacrifice computing efficiency in order to make programs easier to write. Hence, they require more computer power and processing time; however, with the increase in power and speed of hardware and with diminishing costs, the use of 4GLs has spread.
Fourth generation languages have a minimum number of syntax rules. Hence, people who have not been trained as programmers can also use such languages to write application programs. This saves time and frees professional programmers for more complex tasks. The 4GLs are divided into three categories:
1. Query languages: These allow the user to retrieve information from databases by following simple syntax rules. For example, the database may be asked to locate details of all employees drawing a salary of more than $10000. Examples of query languages are IBM's Structured Query Language (SQL) and Query-By-Example (QBE).
2. Report generators: These produce customized reports using data stored in a database. The user specifies the data to appear in the report, the report format, and whether any subtotals and totals are needed. Often, report specifications are selected from pull-down menus, making report generators very easy to use. Examples of report generators are Easytrieve Plus by Pansophic and R&R Relational Report Writer by Concentric Data Systems.
3. Application generators: With application generators, the user writes programs to allow data to be entered into the database. The program prompts the user to enter the needed data and also checks the data for validity. Cincom Systems' MANTIS and ADS by Cullinet are examples of application generators.
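The salary query mentioned under query languages above can be tried out with SQLite, a database engine bundled with Python's standard library. The employee table, its columns, and the sample rows here are invented for the example:

```python
import sqlite3

# In-memory database with a hypothetical employee table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (name TEXT, salary INTEGER)")
db.executemany("INSERT INTO employee VALUES (?, ?)",
               [("Ada", 12000), ("Bayo", 9500), ("Chidi", 15000)])

# The SQL statement states WHAT is wanted; the database engine
# decides HOW to scan the rows -- the non-procedural style of a 4GL.
rows = db.execute(
    "SELECT name FROM employee WHERE salary > 10000 ORDER BY name"
).fetchall()
print(rows)  # prints [('Ada',), ('Chidi',)]
```

Notice that the query never mentions loops, comparisons per row, or storage locations; the method and order of execution are left entirely to the language, as the text describes.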
Advantages of 4GLs:
The main advantage of 4GLs is that a user can create an application in a much shorter development and debugging time than with other programming languages. The programmer is only interested in what has to be done, and that at a very high level. Being non-procedural in nature, 4GLs do not require the programmer to provide the logic to perform a task, so a lot of programming effort is saved as compared to 3GLs. The use of procedural templates and data dictionaries allows automatic type checking (for the programmer and for user input), which results in fewer errors. Using application generators, routine tasks are automated.
Disadvantages of 4GLs:
Since programs written in a 4GL are quite lengthy, they need more disk space and a larger memory capacity than 3GL programs. These languages are also inflexible, because the programmer's control over the language and resources is limited compared to other languages. They also cannot directly utilize the computing power available at the hardware level to the extent that lower-level languages can.
FIFTH GENERATION: 5GL
Fifth generation languages (5GLs) are still largely a future concept: a conceptual view of what programming languages might become. These languages will be able to process natural languages. The computer would be able to accept, interpret, and execute instructions in the natural language of the end users, freeing the user from having to learn any programming language to communicate with the computer. The programmer may simply type the instructions, or simply tell the computer via a microphone what it needs to do. Since these languages are still in their infancy, only a few are currently commercially available. They are closely linked to artificial intelligence and expert systems.
4.0 Summary
In this unit, you have learnt:
i. The classification and generations of computer programming languages
ii. Advantages and disadvantages of each generation
iii. Examples of languages in each generation
5.0 Self-Assessment
A Mention three (3) different programming languages according to their generations
B Differentiate between the generations of languages
C Explain the categories of 4GL
6.0 Tutor Marked Assessment
a. Differentiate between object code and source code
b. Differentiate between compiler and assembler
7.0 Further Reading
http://learnprogramming1.weebly.com/c/difference-between-source-code-and-object-code
https://en.wikibooks.org/wiki/A-level_Computing/AQA/
Computer_Components,_The_Stored_Program_Concept_and_the_Internet/
Fundamentals_of_Computer_Systems/Generations_of_programming_language
https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol2/mjbn/article2.html