
UNIVERSITY OF ILORIN

FACULTY OF COMMUNICATION AND INFORMATION SCIENCES

COURSE CODE: CSC 111

COURSE TITLE: INTRODUCTION TO COMPUTER SCIENCE I

Prepared By: Abimbola G. Akintola (Ph.D.)


Introduction
Introduction to Computer Science I is a first semester course. It is a 2-credit course available to students offering Bachelor of Science (B.Sc.) degrees in Computer Science, Information Systems and allied disciplines.
This course gives students the basic knowledge of Computer Science as a course of study. It will
explore various concepts of computer science and designs. It will also describe the operations
and uses of computer in information processing.
Course Goal
Students are expected to be able to describe basic components of computer science such as the classification of computers in terms of nature of data, generation, and purpose. Other aspects such as Boolean algebra, number systems, and introduction to computer programming are also taught.
Related Courses
Prerequisite: Nil
Required for: CSC 112 – Introduction to Computer Science II
CSC 230- Computer Architecture
CSC 321 – Introduction to Digital Design and Microprocessors
Learning Outcomes
At the end of this course students will be able to:
I. Highlight the Historical Development of Computer Systems;
II. Mention the advantages, disadvantages and characteristics of Computers;
III. Differentiate between computer Software, Hardware and Humanware in relation to their
functions.
IV. Perform various calculations in number systems (binary, decimal, octal, and hexadecimal);
V. Explain Boolean algebra, Logic Gates and form various logic circuits.
VI. Discuss generations of programming languages
Course Guide
Module 1 Computers
Unit 1 Basic Computing Concepts, History of Computers and its classifications
Unit 2 Basic Components of Computer
Unit 3 Characteristics, Advantages and Disadvantages of Computer
Module 2 Number Bases and Computer Arithmetic
Unit 1 Number Base Arithmetic and Types
Unit 2 Number Base Conversion
Module 3 Boolean Algebra and Karnaugh Map
Unit 1 Boolean Algebra, Fundamentals of Truth tables and Precedence
Unit 2 De-Morgan’s Theorem and reducing complex Boolean functions
Unit 3 Karnaugh Map and Minimization of Expressions
Module 4 Logic Gates
Unit 1 Basic Logic Gates
Unit 2 Combinatorial Logic Circuits
Module 5 Computer Programming Languages
Unit 1 Program and Evolution of Languages
Unit 2 Classification and Generations of Programming Languages

Requirements

Registered students for this course will be provided with login details at the point of registration.

Download and read through the unit of instruction stated for each week before the scheduled time of interaction with the course tutor/facilitator. You can also download and watch the relevant videos and listen to the podcasts so that you can understand and follow the course facilitator.

At the scheduled time, you are expected to log in to the classroom for interaction.

Self-assessment components of the courseware are available as exercises to help you learn and master the content you have gone through.

You are to answer the TMA for each unit and submit it for assessment.

Assignments and Grading

Beyond regular classroom attendance, weight will be given to assignments and the final examination as follows:

Tutor Marked Assessment 20%

Continuous Assessment 20%


Final Examination 60%

Total 100%
Unit 1: Definition, History of Computers and their Classification
1.0 Introduction.
2.0 Learning Outcomes
3.0 Main Content
3.1 Basic Computing Concepts
3.2 The History of Computer
3.2.1 Abacus
3.2.2 Blaise Pascal
3.2.3 Joseph Marie Jacquard
3.2.4 Charles Babbage
3.2.5 Augusta Ada Byron
3.2.6 Herman Hollerith
3.2.7 John Von Neumann
3.2.8 J. V. Atanasoff
3.2.9 Howard Aiken
3.2.10 Grace Hopper
3.2.11 Bill Gates
3.2.12 Philip Emeagwali
3.3 Computer Classification
3.3.1 Classification by Generation
3.3.2 Classification by Nature of Data
3.3.3 Classification by Size
3.3.4 Classification by Purpose
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 INTRODUCTION
This unit covers the basic concept of a computer, its historical development over the years as well as its classification. Computers are classified by size, nature of data they can process, generation, and purpose.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
I. Define a Computer
II. Describe the development of Computers over the years
III. State the nature of data a computer can process
IV. Compare generations of computers
V. Differentiate computer by size or by purpose
3.0 Main Content
3.1 Basic Computing Concepts
A computer can be described as an electronic device that accepts data as input, processes the
data based on a set of predefined instructions called program to produce the result of these
operations as output called information. From this description, a computer can be referred to as
an Input-Process-Output (IPO) system, pictorially represented in the Figure 1:
INPUT → PROCESS → OUTPUT

Figure 1.1: IPO Representation of a computer System


Data are raw facts, such as a score in an examination or the name of a student, for example 55 or Malik respectively. There are three types of data – numeric, alphabetic, and alphanumeric. Numeric data consist of the digits 0 – 9 (such as 31), while alphabetic data consist of any of the letters of the English alphabet in upper and lower case (e.g. Toyin). Alphanumeric data can consist of numbers, letters or special characters, such as a vehicle plate number (e.g. AE 731 LRN).
Information: data as described above carry no meaning; however, when they are transformed into a more meaningful and useful form, the result is called information. The transformation process involves a series of operations performed by the computer on the raw data fed into the system. The operations can be arithmetic (such as addition, subtraction, multiplication, and division), logical comparison or character manipulation (as in text processing).
Logical comparison means testing whether one data item is greater than, equal to, or less than another item; based on the outcome of the comparison, a specified action can be taken. The output of the processing can be in the form of reports which can be displayed or printed.

3.2 The History of Computer


In the early days of man, fingers and toes were used for counting. Later on, sticks and pebbles
were used. Permanent records of the result of counting were kept by putting marks on the
ground, wall and so on using charcoal, chalk, and plant juice.
The historical development of computing focuses on the digital computer, from the abacus to the modern electronic computer. Some of the people whose contributions to the development of the computer have been widely acknowledged will be discussed:
3.2.1 Abacus
The abacus was invented to replace the old methods of counting. It is an instrument known to
have been used for counting as far back as 500 B.C. in Europe, China, Japan and India and it is
still being used in some parts of China today.
The abacus qualifies as a digital instrument because it uses beads as counters to calculate in discrete form. It is made of a board that consists of beads that slide on wires. The abacus is divided by a wooden bar or rod into two zones - upper and lower. Perpendicular to this bar are wires arranged in parallel, each one representing a positional value. Two beads are arranged on each wire in the upper zone, while five beads are arranged on each wire in the lower zone.
The abacus can be used to perform arithmetic operations such as addition and subtraction
efficiently.

Figure 1.2: Modern abacus.


Note that the abacus is really just a representation of the human fingers: the 5 lower rings
on each rod represent the 5 fingers and the 2 upper rings represent the 2 hands.
3.2.2 Blaise Pascal
Pascal was born at Clermont, France in 1623 and died in Paris in 1662. Pascal was a scientist as well as a philosopher. He started to build his mechanical machine in 1640 to aid his father in calculating taxes. He completed the first model of his machine in 1642 and presented it to the public in 1645.
The machine, called the Pascal machine or Pascaline, was a small box with eight dials that resembled analog telephone dials. Each dial is linked to a rotating wheel that displays digits in a register window. Pascal's main innovative idea was the linkage provided between the wheels, such that a carry passed from one wheel to its left neighbour whenever the wheel moved from a display of 9 to 0. The machine could add and subtract directly.

Figure 1.3: Pascal's Pascaline [photo © 2002 IEEE]


A Pascaline opened up so you can observe the gears and cylinders which rotated to display
the numerical result
3.2.3 Joseph Marie Jacquard
In 1801 the Frenchman Joseph Marie Jacquard invented a power loom that could base its weave
(and hence the design on the fabric) upon a pattern automatically read from punched wooden
cards, held together in a long row by rope. Descendants of these punched cards have been in use
ever since.

Figure 1.4: Jacquard's Loom showing the threads and the punched cards

Figure 1.5: By selecting particular cards for Jacquard's loom you defined the woven pattern
[photo © 2002 IEEE]
3.2.4 Charles Babbage
Charles Babbage was born in Totnes, Devonshire on December 26, 1792 and died in London on
October 18, 1871. He was educated at Cambridge University where he studied Mathematics. In
1828, he was appointed Lucasian Professor at Cambridge. Charles Babbage started work on his
analytic engine when he was a student. His objective was to build a program-controlled,
mechanical, digital computer incorporating a complete arithmetic unit, store, punched card input
and a printing mechanism.
The program was to be provided by the set of Jacquard cards. However, Babbage was unable to
complete the implementation of his machine because the technology available at his time was not
adequate to see him through. Moreover, he did not plan to use electricity in his design. It is
noteworthy that Babbage’s design features are very close to the design of the modern computer.
Babbage invented the modern postal system, cowcatchers on trains, and the ophthalmoscope, which is still used today to examine the eye.
Figure 1.6: A small section of the type of mechanism employed in Babbage's Difference
Engine [photo © 2002 IEEE]
3.2.5 Augusta Ada Byron
Ada Byron was the daughter of the famous poet Lord Byron and a friend of Charles Babbage,
(Ada later became the Countess of Lovelace by marriage). Though she was only 19, she was
fascinated by Babbage's ideas and through letters and meetings with Babbage she learned enough
about the design of the Analytic Engine to begin fashioning programs for the still unbuilt
machine. While Babbage refused to publish his knowledge for another 30 years, Ada wrote a
series of "Notes" wherein she detailed sequences of instructions she had prepared for the
Analytic Engine. The Analytic Engine remained unbuilt but Ada earned her spot in history as the
first computer programmer. Ada invented the subroutine and was the first to recognize the
importance of looping.
3.2.6 Herman Hollerith
Hollerith was born at Buffalo, New York in 1860 and died at Washington in 1929. Hollerith
founded a company which merged with two other companies to form the Computing Tabulating
Recording Company which in 1924 changed its name to International Business Machine (IBM)
Corporation, a leading company in the manufacturing and sales of computer today.
Hollerith, while working at the Census Department in the United States of America, became convinced that a machine based on cards could carry out the purely mechanical work of tabulating population and similar statistics. He left the Census in 1882 to start work on the Punched Card Machine, also called the Hollerith desk.
This machine system consisted of a punch, a tabulator with a large number of clock-like counters
and a simple electrically activated sorting box for classifying data in accordance with values
punched on the card. The principle he used was simply to represent logical and numerical data in
the form of holes on cards.
His system was installed in 1889 in the United States Army to handle Army medical statistics. He was then asked to install his machines to process the 1890 Census in the USA. This he did, and in two years the processing of the census data, which used to take ten years, was completed. Hollerith's machines were used in other countries such as Austria, Canada, Italy, Norway and Russia.
Figure 1.7: Hollerith desks [photo courtesy The Computer Museum]

3.2.7 John Von Neumann


Von Neumann was born on December 28, 1903 in Budapest, Hungary and died in Washington
D. C. on February 8, 1957. He was a great mathematician with significant contribution to the
theory of games and strategy, set theory and the design of high speed computing machines. In
1933, he was appointed one of the first six professors of the School of Mathematics in the Institute for Advanced Study at Princeton, USA, a position he retained until his death.
Von Neumann, together with some others, presented a paper titled “Preliminary Discussion of the Logical Design of an Electronic Computing Instrument”, whose proposed design is popularly known as the von Neumann machine. This paper contains revolutionary ideas on which present-day computers are based.
The machine has Storage, Control, Arithmetic and input/output units. The machine was to be a
general-purpose computing machine. It was to be an electronic machine and introduced the
concept of stored program. This concept implied that the operations in the computer were to be
controlled by a program stored in the memory of the computer. This program was to consist of
codes that intermixed data with instructions.
As a result, it became possible for computations to proceed at electronic speed and for the same set of operations or instructions to be performed repeatedly. The paper also introduced the concept of the program counter: a high-speed register that, whenever an instruction is fetched, automatically contains the address of the next instruction to be executed.
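The stored-program and program-counter ideas can be illustrated with a toy Python sketch. The four-instruction machine below is entirely hypothetical, invented only to show instructions and data sharing one memory while a program counter tracks the next instruction; it is not a model of any real computer.

# A toy illustration of the stored-program concept: instructions live in
# memory, and a program counter (pc) holds the address of the next
# instruction to fetch. The instruction set is invented for demonstration.

memory = [
    ("LOAD", 5),     # put the constant 5 in the accumulator
    ("ADD", 3),      # add the constant 3
    ("PRINT", None), # output the accumulator
    ("HALT", None),  # stop
]

pc, acc = 0, 0
while True:
    op, arg = memory[pc]   # fetch the instruction at address pc
    pc += 1                # pc now holds the address of the next instruction
    if op == "LOAD":       # decode and execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "PRINT":
        print(acc)         # -> 8
    elif op == "HALT":
        break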
3.2.8 J. V. Atanasoff
One of the earliest attempts to build an all-electronic digital computer occurred in 1937 by J. V.
Atanasoff, a professor of physics and mathematics at Iowa State University. By 1941 he and his
graduate student, Clifford Berry, had succeeded in building a machine that could solve 29
simultaneous equations with 29 unknowns. This machine was the first to store data as a charge
on a capacitor, which is how today's computers store information in their main memory. It was
also the first to employ binary arithmetic. However, the machine was not programmable, it
lacked a conditional branch, its design was appropriate for only one type of mathematical
problem, and it was not further pursued after World War II.
Figure 1.8: The Atanasoff-Berry Computer [photo © 2002 IEEE]
3.2.9 Howard Aiken
Howard Aiken of Harvard was the principal designer of the Mark I. The Harvard Mark I
computer was built as a partnership between Harvard and IBM in 1944. This was the first
programmable digital computer made in the U.S. But it was not a purely electronic computer.
Instead the Mark I was constructed out of switches, relays, rotating shafts, and clutches. The
machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had
a 50ft rotating shaft running its length, turned by a 5 horsepower electric motor. The Mark I ran
non-stop for 15 years.

Figure 1.9: The Harvard Mark I: An electro-mechanical computer

Figure 1.10: One of the four paper tape readers on the Harvard Mark I
3.2.10 Grace Hopper
Grace Hopper was one of the primary programmers for the Mark I. Hopper found the first
computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the
reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at
least 1889 but Hopper is credited with coining the word "debugging" to describe the work to
eliminate program faults.

Figure 1.11: The first computer bug [photo © 2002 IEEE]


In 1953 Grace Hopper invented the first high-level language, "Flow-matic". This language
eventually became COBOL which was the language most affected by the infamous Y2K
problem. A high-level language is designed to be more understandable by humans than is the
binary language understood by the computing machinery. A high-level language is worthless
without a program -- known as a compiler -- to translate it into the binary language of the
computer and hence Grace Hopper also constructed the world's first compiler. Grace remained
active as a Rear Admiral in the Navy Reserves until she was 79.

3.2.11 Bill Gates


William (Bill) H. Gates was born on October 28, 1955 in Seattle, Washington, USA. Bill Gates decided to drop out of college so he could concentrate all his time on writing programs for the Intel 8080 family of personal computers (PCs). This early experience put Bill Gates in the right place at the right time once IBM decided to standardize on Intel microprocessors for its line of PCs in 1981. Gates founded a company called Microsoft Corporation (together with Paul G. Allen), which released its first operating system, MS-DOS 1.0, in August 1981 and the last of that line (MS-DOS 6.22) in April 1994. Bill Gates announced Microsoft Windows on November 10, 1983.

3.2.12 Philip Emeagwali


Philip Emeagwali was born in 1954, in the Eastern part of Nigeria. He had to leave school
because his parents couldn't pay the fees and he lived in a refugee camp during the civil war. He
won a scholarship to university. He later migrated to the United States of America. In 1989, he
invented the formula that used 65,000 separate computer processors to perform 3.1 billion
calculations per second.
Philip Emeagwali, a supercomputer and Internet pioneer, is regarded as one of the fathers of the internet because he invented an international network which is similar to, but predates, the
Internet. He also discovered mathematical equations that enable the petroleum industry to
recover more oil. Emeagwali won the 1989 Gordon Bell Prize, computation's Nobel Prize, for
inventing a formula that lets computers perform the fastest computations, a work that led to the
reinvention of supercomputers.
3.3 Computer Classifications
Computers can be classified in various ways because of the complexity and diversity of their applications. Four basic classifications will be adopted: classification by generation, classification by nature of data, classification by size and classification by purpose.
3.3.1 Classification by Generation
There are five basic generations into which computers can be classified based on year and components.
I. First Generation Computer: These were the early computers, manufactured from about 1940 until 1956. First generation computers were characterized by the use of vacuum tubes as their major components. These vacuum tubes generated enormous heat and consumed much electricity. First generation computers introduced the concept of stored programs. Only computer experts could program them, and programming was done exclusively in machine language. Examples are UNIVAC (Universal Automatic Computer), ENIAC (Electronic Numerical Integrator and Computer) etc.
Figure 1.12: UNIVAC I supervisory control console
II. Second Generation Computer: These were the computers that succeeded the first generation, from 1956 until 1963. Second generation computers were built around transistors, which replaced the vacuum tubes of the first generation. The use of transistors in place of vacuum tubes resulted in reduced size compared with first generation computers, lower power consumption, less heat, and improved storage owing to the introduction of magnetic storage devices. The overall effects were improved reliability and the introduction of symbolic programming languages. Examples are ATLAS, the IBM 1400 series (International Business Machines), PDP I & II (Programmed Data Processor I & II) etc.

Figure 1.13: Second Generation Computer


III. Third Generation Computer: This generation succeeded the second generation and spanned 1964 to about 1971. Technological advances in industry made it possible to couple many transistors into a single component. Hence, the major component that characterized the third generation is the Integrated Circuit (IC), in which thousands of transistors are combined into a single unit. The integration of transistors into one component made computers smaller in size compared with the first and second generations, faster, and able to consume less power and generate less heat. The concept of multiprogramming was introduced in this generation, and programming was made easier by the use of high-level languages. Examples include the IBM 360/370 series, the ICL 1900 series (International Computers Limited) etc.

Figure 1.14: Third Generation Computer


IV. Fourth Generation Computer: The emphasis in the first three generations had been on developing computer systems that were less expensive, more portable and highly reliable. Fourth generation computers were developed with the same aims in mind. They were built around Very Large-Scale Integration (VLSI), in which over ten thousand flip-flops are placed on a single silicon chip, i.e. thousands of ICs combined into a single chip. This period witnessed the era of the microcomputer, with the microprocessor as its major component. These systems came into being in 1971 and are still in use today. Examples include IBM and COMPAQ 2000 series, Dell series, Toshiba etc.
Figure 1.15: Fourth Generation Computer

V. Fifth Generation Computer: The development of fifth generation computers started in the 1980s, and research in this generation is still ongoing; it is regarded as the present and future of computing. Although some of these machines are already in use, a lot of work is still needed to realize the goals of this generation. The objective is to build computer systems that mimic the intelligence of a human expert in a knowledge domain such as medicine, law, education or criminal investigation. This objective is pursued through the implementation of Artificial Intelligence and Expert Systems development.

Figure 1.16: Fifth Generation Computer


Today, sixth, seventh and eighth generations are available in the market.

3.3.2 Classification by Nature of Data


There are two types of data the computer can process: digital and analogue. Accordingly, computers can be classified into three types: analogue, digital and hybrid computers.
I. Analogue Computer: This type of computer deals with quantities that are continuously
varying. It measures changes in current, temperature or pressure and translates these data
into electrical pulses for processing. Examples are speedometer, electric meter, water
meter, thermometer etc.
II. Digital Computer: This operates on data represented in the form of discrete values or symbols (e.g. 0, 1, 2, 3, X, Y, Z, …). Digital computers handle numbers discretely and precisely rather than approximately. These computers are very common in use both at home and in offices.
III. Hybrid Computer: These computers combine the features of both analogue and digital computers. They handle data in both discrete and continuously variable quantities. They are mostly found in industrial processes for data acquisition and data processing purposes. In most cases, the analogue signal generated on the analogue side needs to be converted to a digital signal for processing by the digital side, hence the need for Analogue-to-Digital Converters (ADC) and Digital-to-Analogue Converters (DAC).

3.3.3 Classification by Size


Computer classifications based on sizes are as follows:
I. Mainframe Computer: This is a system with a very powerful Central Processing Unit (CPU) linked by cable to hundreds or thousands of terminals; the system is capable of accepting data from all the terminals simultaneously. These computers are very big and very expensive general-purpose computers with memory capacity of more than 100 million bytes and processing power well above 10 million instructions per second (MIPS). Mainframe computers are used in large organizations such as banks, oil companies, big hospitals, airline reservation companies, and examination bodies such as WAEC, NECO and JAMB that have very large volumes of data to process which also need to be adequately secured. Examples include the ICL 1900 and IBM 360/370 series, IBM 704 etc.
II. Minicomputer: This type of computer shares similar features with the mainframe but is smaller in physical size, generates less heat, has a smaller instruction set and is less expensive. It requires operating conditions similar to the mainframe's: a very cool environment because of the heat generated, a raised (false) floor, a dust-free environment and high-security office accommodation. Examples of minicomputers are the IBM AS/400, NCR Tower 32, DEC System, PDP 7 etc.
III. Microcomputer: These computers are much smaller in size compared to mini and mainframe computers. They are far cheaper in naira value than either mainframes or minicomputers. In these systems, the various integrated circuits and elements of a computer are replaced by a single integrated circuit called a “chip”. The microcomputer was first developed by companies like Apple Computers, followed by the IBM PC in 1981. It is also called a Personal Computer (PC).
IV. Super Computers: Supercomputers are extraordinarily powerful computers and are the largest and fastest computer systems of recent times. They provide a high level of accuracy, precision and speed for mathematical computations in meteorological, astronomical and oil exploration applications. In most Hollywood movies they are used for animation purposes. They are also helpful for forecasting weather reports worldwide. Examples are the Cray-1, Cyber series, Fujitsu and ETA-10 systems. Most of these machines are not available for commercial use.
V. Notebook Computers: These have small size and low weight, and a notebook is easy to carry anywhere. It is preferred by students and business people for meeting their assignments and other tasks. Its approach is the same as the personal computer's; it is a replacement for the personal desktop or microcomputer. It is also referred to as a laptop. E.g. HP 530, Dell etc.
Other types of computers based on size are palmtop and PDA (Personal Digital Assistant).

3.3.4 Classification by Purpose


Computers are classified into two groups according to usage or function: special purpose and general-purpose computers.
I Special Purpose Computer: This type of computer is specially developed to perform
only one task. The system is highly efficient and economical but lacks flexibility. The program
for the machine is built into it permanently. Some areas of usage include air traffic control systems, military weapons control systems, ship navigation systems and industrial process control.
II General Purpose Computer: These computers have the ability to handle a wide variety
of programs and to solve many problems such as payroll, numerical analysis, software
development for accounting, inventory system etc. It makes use of stored program for switching
from one application to another.

4.0 Summary
In this unit, you have learnt that:
 A computer is an electronic device that accepts data as input and processes the data through predefined instructions to produce information.
 The historical development of computer was from the Abacus to the modern electronic
computer.
 Computers can be classified based on their generation, the nature of data they can process, their size, as well as the purpose for which they are built.
5.0 Self-Assessment
A. Discuss the contribution(s) of any three (3) persons to the history of computers
B. With appropriate examples, differentiate between Analogue and Digital Computers
C. Describe classification of Computers by Purpose

6.0 Tutor Marked Assessment


A. List two (2) examples of computers classified as special purpose computers
B. Differentiate between Mainframe Computer and Super Computers
C. Notebook Computer is a replacement for microcomputer. Yes/No, Justify your answer.
D. Identify the major component or characteristic of the first four generations of Computers
E. State the contribution(s) of John Von Neumann to the history of Computer

7.0 Further Reading


https://en.wikipedia.org/wiki/Herman_Hollerith
http://www.byte-notes.com/types-computers-purpose
https://www.webopedia.com/DidYouKnow/Hardware_Software/FiveGenerations.asp
https://www.tutorialspoint.com/computer_fundamentals/computer_first_generation.htm
https://en.wikipedia.org/wiki/File:Hollerith.jpg
http://www.computerhistory.org/babbage/
http://www.computerhistory.org/babbage/engines/
https://www.computerhope.com/jargon/p/punccard.htm
https://www.computerhistory.org/revolution/early-computer-companies/5/100

8.0 References
Egbewole, W. and Jimoh, R. (Eds.). (2017). Digital Skill Acquisition. Ilorin, Nigeria: Unilorin.
Dale, N. (2005). Computer Science Illuminated. London: Jones and Bartlett.
French, C. (2001). Introduction to Computer Science. London: Continuum.
Unit 2: Basic Components of Computer
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Basic Component of Computer
3.2 The Hardware
3.3 The Software
3.3.1 System Software
3.3.2 Application Software
3.4 The Humanware
3.5 Organizational Structure of a Typical Computer Installation
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 Introduction
The computer can be divided into three (3) different components: hardware, software and humanware. The hardware refers to the visible components of the system, while the software refers to the non-visible components of the computer system. The humanware refers to the human intervention that the computer needs to perform its tasks.

2.0 Learning Outcomes


At the end of this unit, you should be able to:
I. Differentiate between hardware, software and humanware
II. Describe the organizational structure of a typical computer installation

3.0 Main Content


3.1 Basic Component of Computer
Components of a computer refer to the physical and non-physical parts of the system. A computer system can be divided into hardware, software and humanware.

3.2 The Hardware


The hardware refers to the physical components and the devices which make up the visible
computer. It can be divided into two: Central Processing Unit (CPU) and the Peripherals. The
CPU is responsible for all processing that the computer does while the peripherals are
responsible for feeding data into the system and for collecting information from the system.

The CPU consists of the Main Storage, the ALU and the Control Unit. The main storage is used for storing data to be processed as well as the instructions for processing them. The ALU is the unit for arithmetic and logical operations. The control unit ensures the smooth operation of the other hardware units. It fetches an instruction, decodes (interprets) it and issues commands to the units responsible for executing the instruction.

The peripherals are in three categories: Input devices, Output devices and auxiliary storage
devices.

The input device is used for supplying data and instructions to the computer. Examples are
terminal Keyboard, Mouse, Joystick, Microphone, Scanner, Webcam, etc.

Output device is used for obtaining result (information) from the computer. Examples are
Printers, Video Display Unit (VDU), loudspeaker, projector etc,

Auxiliary Storage Devices are used for storing information on a long-term basis. Examples are
hard disk, flash disk, magnetic tape, memory card, solid state drive SDD etc.

A simple model of the hardware part of a computer system is shown below:

[Diagram: the peripherals - Input Unit, Output Unit and Auxiliary Storage Unit - surround the Central Processing Unit, which comprises the Main Memory, the Arithmetic and Logic Unit and the Control Unit; arrows indicate data flow and signals/commands between the units.]

Figure 2.1: Hardware part of a computer system


3.3 The Software
Software is the general name for programs. A program consists of a sequence of instructions required to accomplish a well-defined task. Examples of such tasks include:

1. Finding the average score of a student

2. Computing the net pay of an employee

3. Solving a set of simultaneous linear equations
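As an illustration of the first task, here is a minimal Python sketch of such a program; the scores are invented for the example.

# A minimal sketch of task 1 above: a short program (software) directing
# the hardware to find a student's average score. The scores are made up.

scores = [55, 70, 62, 48]                 # raw data
average = sum(scores) / len(scores)       # processing
print(f"Average score: {average:.2f}")    # information: Average score: 58.75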

It is the software that enables the hardware to be put into effective use. There are two main
categories of software – System software and Application software.

3.3.1 System Software

System software are programs, commonly written by computer manufacturers, which have a direct effect on the control, performance and ease of use of the computer system. Examples are Operating Systems, Language Translators, Utilities and Service Programs, and Database Management Systems (DBMS).

Operating System is a collection of program modules which form an interface between the
computer hardware and the computer user. Its main function is to ensure a judicious and
efficient utilization of all the system resources (such as the processor, memory, peripherals and
other system data) as well as to provide programming convenience for the user. Examples are
Unix, Linux, Windows, Macintosh, and Disk Operating System (DOS).

Language Translators are programs which translate programs written in non-machine languages such as FORTRAN, C, Pascal, and BASIC into their machine language equivalents. Examples of language translators are assemblers, interpreters, compilers and preprocessors.

 Assembler: This is a program that converts a program written in assembly language (low-level language) into its machine language equivalent.

 Interpreter: This is a program that converts a program written in a high-level language (HLL) into its machine language (ML) equivalent one line at a time. A language like BASIC is normally interpreted.

 Compiler: This is a program that translates a program written in a high-level language (HLL) into its machine language (ML) equivalent all at once. Compilers are normally called by the names of the high-level languages they translate, for instance COBOL compiler, FORTRAN compiler etc.

 Preprocessor: This is a language translator that takes a program in one HLL and produces an equivalent program in another HLL. For example, there are many preprocessors that map structured versions of FORTRAN into conventional FORTRAN.
Database Management System (DBMS) is a complex program that is used for the creation, storage, retrieval, securing and maintenance of a database. A database can be described as an organized collection of related data relevant to the operations of a particular organization. The data are usually stored in a central location and can be accessed by different authorized users.

Linker is a program that takes several object files and libraries as input and produces one
executable object file.

Loader is a program that places an executable object file into memory and makes it ready for execution. Both the linker and the loader are provided by the operating system.

Utility and Service Programs

These are programs which provide facilities for performing common computing tasks of a
routine nature. The following are some of the examples of commonly used utility programs:

 Sort Utility: This is used for arranging the records of a file in a specified sequence (alphabetic, numerical or chronological) of a particular data item within the records. The data item is referred to as the sort key (see the sketch after this list).

 Merge Utility: This is used to combine two or more already ordered files together to
produce a single file.

 Copy Utility: This is used mainly for transferring data from a storage medium to the
other, for example from disk to tape.

 Debugging Facilities: These are used for detecting and correcting errors in programs.

 Text Editors: These provide facilities for the creation and amendment of programs from the terminal.

 Benchmark Program: This is a standardized collection of programs used to evaluate hardware and software. For example, a benchmark might be used to compare the performance of two different computers on identical tasks, or to assess the comparative performance of two operating systems.
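As a rough illustration of the sort and merge utilities above, here is a minimal Python sketch; the student records and file contents are invented for the example.

# A minimal sketch of the sort-utility idea: records arranged in a
# specified sequence of one data item, the sort key. The records are
# invented for illustration.
import heapq

records = [
    {"name": "Malik", "score": 55},
    {"name": "Toyin", "score": 72},
    {"name": "Amina", "score": 61},
]

# Sort by the "score" field (numerical sequence); key= names the sort key.
by_score = sorted(records, key=lambda r: r["score"])

# A merge utility combines two already-ordered files into one ordered file.
file_a = [10, 30, 50]
file_b = [20, 40, 60]
merged = list(heapq.merge(file_a, file_b))   # -> [10, 20, 30, 40, 50, 60]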

3.3.2 Application Software

These are programs written by a user to solve an individual application problem. They do not have any effect on the efficiency of the computer system. An example is a program to calculate the grade point average of all the 100L students. Application software can be divided into two, namely: Application Packages and User's Application Programs. When application programs are written in a very generalized and standardized form such that they can be adopted by a number of different organizations or persons to solve similar problems, they are called Application Packages. There are a number of microcomputer-based packages. These include
word processors (such as Ms-word, WordPerfect, WordStar); Database packages (such as
Oracle, Ms-access, Sybase, SQL Server, and Informix); Spreadsheet packages (such as Lotus 1-
2-3 and Ms-Excel); Graphic packages (such as CorelDraw, Fireworks, Photoshop etc), and
Statistical packages (such as SPSS). A User's Application Program is a program written by the user to solve a specific problem which is not generalized in nature. Examples include a program to find the roots of a quadratic equation, a payroll application program, and a program to compute students' results.

3.4 The Human-ware

Although the computer system is automatic in the sense that, once initiated, it can continue to work on its own without human intervention under the control of a stored sequence of instructions (a program), it is not automatic in the sense that it has to be initiated by a human being, and the instructions specifying the operations to be carried out on the input data are given by human beings. Therefore, apart from the hardware and software, the third element that can be identified in a computer system is the humanware. This term refers to the people that work with the computer system. The components of the humanware in a computer system include the system analyst, the programmer, the data entry operator, end users etc.

3.5 Organizational Structure of a Typical Computer Installation


The following diagram shows the organizational structure of a typical computer installation

[Diagram: the Data Processing Manager (DPM) heads two arms - System Development (System Analysts and Programmers) and the Operations Team (Operators, Control Clerks and Data Entry Operators).]

Figure 2.2: Organizational Structure of a typical computer installation

The Data Processing Manager (DPM) supervises all other personnel that work with him and is answerable directly to the management of the organization in which he works.

A System Analyst is a person who studies an existing system in operation in order to ascertain whether or not computerization of the system is necessary and/or cost-effective. When it is found necessary, he designs a computerized procedure and specifies the functions of the various programs needed to implement the system.

A Programmer is the person that writes the sequences of instructions to be carried out by the computer in order to accomplish well-defined tasks. The instructions are written in computer programming languages.
A Data Entry Operator is the person that enters data into the system via the keyboard or any input device attached to a terminal. There are other ancillary staff that perform other functions such as controlling access to the computer room and controlling the flow of jobs in and out of the computer room.

An end-user is one for whom a computerized system is being implemented. End-users interact with the computerized system in their day-to-day operations within the organization. For example, a cashier in a bank who receives cash from customers or pays money to customers interacts with the banking information system.

4.0 Summary
In this unit, you have learnt that:
 Hardware components are the physical components of the system; software components are non-physical and are also referred to as programs; while humanware refers to the people that work with the computer system
 The organizational structure of a computer installation includes the data processing manager, system analysts, programmers, data entry operators, end users and so on
5.0 Self-Assessment
1. Differentiate between hardware and software
2. Draw the diagrammatic representation of a computer installation

6.0 Tutor Marked Assessment


i. Explain the role of the following personnel in an organization:
a) Data Entry Operator
b) Programmer
c) System Analyst
d) Data Processing Manager
ii. Define the following terms:
a) Assembler b) Compiler c) Interpreter d) Preprocessor

7.0 Further Reading


https://en.wikipedia.org/wiki/Application_software
http://dspace.mit.edu/bitstream/handle/1721.1/47060/computerizedmana00zann.pdf?
sequence=1%20?iframe=true&width=100%&height=100%
http://www.informit.com/articles/article.aspx?p=29470&seqNum=3

8.0 References
Egbewole, W. and Jimoh R. (Eds.). (2017) Digital Skill Acquisition. Ilorin, Nigeria: Unilorin
Unit 3 Characteristics, Advantages and Disadvantages of Computers
1.0 Introduction.
2.0 Learning Outcomes
3.0 Main Content
3.1 Characteristics of Computers
3.2 Advantages of Computers
3.3 Disadvantages of Computer
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading
1.0 Introduction
In this unit, you will learn the characteristics of a computer. These are the features or attributes that a machine must possess to be regarded as a computer. The advantages of computers are the rewards the user or society at large gets from the use of computers, while the disadvantages are the negative aspects of computer systems. Today, people use computers to make work easier and faster, as well as to reduce the overall cost of completing a task.

2.0 Learning Outcomes


At the end of this unit, you should be able to:
I. List the characteristics of Computers
II. List five (5) Advantages of Computers
III. Mention five (5) Disadvantages of Computers
3.0 Main Content
3.1 Characteristics of Computers
The computer system is characterized by the following:
I. Electronic in nature: Data are represented in the form of electrical pulses. The basic components and operations of computers are electronic, using integrated circuits (ICs).
II. Speed: Computers can perform millions of calculations in a few seconds, compared to a man who would spend months doing the same task.
III. Accuracy: Computers perform tasks with 100% accuracy provided that correct input has been given. This is summarized as garbage in, garbage out (GIGO): given accurate input the computer generates accurate output, and given wrong input it gives wrong output.
IV. Consistency: Given the same set of input data, the same result will always be produced.
V. Iterative: The ability to perform repetitive operations without getting bored or fatigued.
VI. Storage: Computers can store data/information on a long-term basis, including a variety of data/information types such as text, images, audio and video.
VII. Automatic control: Once initiated, a computer can operate on its own, without human intervention, under the control of a stored sequence of instructions called a program.
VIII. No Feelings: The computer does not have emotions or feelings, though some robots are now created to simulate feelings.
3.2 Advantages of Computers
The use of computer technology has affected virtually every aspect of life. Although it has disadvantages, the advantages outweigh the disadvantages. Below are some of the advantages:

I. Sort, Organize, and Search Information

The computer can use its stored information better than a human. Information in the computer can be sorted or organized into different categories, and it can also be searched faster.

II. Better Understanding of Data

Through different fields of computer technology, for example data mining, computers can understand data better than humans and can use what is learnt to predict future occurrences to a certain extent. For example, using a database, the computer can predict that whenever bread is sold, eggs, butter or beverages are sold along with it. This helps the business owner know that s/he must not run out of eggs, butter or beverages when bread is available for sale.

III. Connection to the Internet

The computer helps to connect to the internet, where the choices available are endless once connected. Many of the advantages of the computer today come through connection to the internet.

IV. Improves your abilities

The computer can help its user improve in several ways, for example through the use of spell checkers, grammar correctors and so on. Users can improve their abilities even if they have a hard time learning.

V. Assist the physically challenged

Computers are excellent tools that can be used to help the physically challenged. For example, speech synthesis, where the user types and the computer reads the text aloud, can help a physically challenged user who cannot speak. Computers are also great tools for the blind, with special software that can be installed to read out what is on the screen.
VI. Entertainment

The computer can keep its user entertained. Songs, videos and games can be stored on the computer for use.
3.3 Disadvantages of Computers
Though the advantages of computer devices are numerous, there still exist some disadvantages. Some of the disadvantages are:
A. Attack
Attacks can be in the form of viruses and hacking. A virus spreads malicious code within the system, while hacking is unauthorized access to the system.

B. Online Cyber Crimes

Online cybercrime means a computer and network may have been used to commit a crime. Cyberstalking and identity theft are among the common cybercrimes of today.

C. Less Human Interactions

Computers have made it possible for people to work alone on tasks on which they would have needed to collaborate with others had they done them manually, such as entering figures in books of accounts. As a result, the work environment in offices is such that workers concentrate on their computers with little interaction with colleagues.

D. Reduction in Employment Opportunities

Because computers automate tasks formerly done manually, fewer workers may be needed to complete the same amount of work.

4.0 Summary
In this unit, you have learnt that:
 The attributes that make a machine a computer include speed, storage, accuracy, consistency and many more
 Advantages of computers include, but are not limited to, entertainment, improvement of personal abilities, better understanding of data, aid to physically challenged users and so on
 Disadvantages of computers include reduction in employment opportunities, attacks, and less human interaction
5.0 Self-Assessment
a) List five (5) advantages of computers to the society
b) List five (5) characteristics of computers
6.0 Tutor Marked Assessment
a) The computer is electronic in nature. True/False. Justify your answer.
b) Identify three (3) fields of computer technology that serve as aids to humans
c) The computer is said to be garbage in, garbage out. Explain, with an appropriate example, the meaning of the phrase.

7.0 References
Egbewole, W. and Jimoh R. (Eds.). (2017) Digital Skill Acquisition. Ilorin, Nigeria: Unilorin
8.0 Further Reading
http://oer.nios.ac.in/wiki/index.php/characteristics_of_computers
http://ecomputernotes.com/fundamental/introduction-to-computer/what-are-characteristic-
of-a-computer
http://www.byte-notes.com/advantages-and-disadvantages-computers
https://www.computerhope.com/issues/ch001798.htm
Module 2 Number Bases and Computer Arithmetic

Unit 1 Number Base Arithmetic and Types

Unit 2 Number Base Conversion

Unit 1 Number Base Arithmetic and Types

1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Number Base Arithmetic
3.2 Number Base Types
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 Introduction
Number base is a way of representing numbers in computing. There are different types of number base systems, which means numbers can be represented in any of the bases. The number bases we will cover include the decimal number system, which uses digits 0-9; the binary number system, with digits 0 and 1; the octal number system, using digits 0-7; and the hexadecimal system, with digits 0-9 and letters A-F.

2.0 Learning Outcomes


At the end of this unit, you should be able to explain:

 The meaning of number base
 The different types of number base
 The digits used for each number base.
3.0 Main Content
3.1 Number Base Arithmetic

The number system is a writing system for representing numbers of a given set, using digits or other symbols in a consistent manner. Numbers in such a system are represented by means of positional notation; that is, the value or weight of a digit depends on its location within the number. A number N, when expressed in positional notation in the base b, is written as:

a_n a_(n-1) a_(n-2) … a_1 a_0 . a_(-1) a_(-2) … a_(-m)

and defined as

N = a_n×b^n + a_(n-1)×b^(n-1) + … + a_1×b^1 + a_0×b^0 + a_(-1)×b^(-1) + a_(-2)×b^(-2) + … + a_(-m)×b^(-m)    (2.1)

The a's in the above equation are called digits, and each may have one of b possible values. Positional notation employs the radix point to separate the integer and fractional parts of the number. Decimal arithmetic uses the decimal point, while binary arithmetic uses the binary point.
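Equation (2.1) can be checked with a short Python sketch; the helper function below is ours, written for illustration, not part of any standard library.

# A minimal sketch of equation (2.1): evaluating a number written in
# positional notation in base b. Integer digits are given most significant
# first; fractional digits follow the radix point.

def positional_value(int_digits, frac_digits, b):
    value = 0
    for a in int_digits:                  # a_n*b^n + ... + a_1*b^1 + a_0*b^0
        value = value * b + a
    for i, a in enumerate(frac_digits, start=1):
        value += a * b ** -i              # a_-1*b^-1 + ... + a_-m*b^-m
    return value

print(positional_value([1, 2, 3], [], 10))        # -> 123
print(positional_value([1, 0, 1, 1], [], 2))      # -> 11
print(positional_value([0], [0, 1, 1, 0, 1], 2))  # -> 0.40625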

3.2 Number Base Types

We shall be considering four bases – decimal, binary, octal, and hexadecimal

Decimal: base b = 10, digits a = {0,1,2,3,4,5,6,7,8,9}
Binary: base b = 2, digits a = {0,1}
Octal: base b = 8, digits a = {0,1,2,3,4,5,6,7}
Hexadecimal: base b = 16, digits a = {0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F}

A subscript is used to indicate the base of a number when necessary, for example, 123_10, 456_8, 1011_2.
The decimal number system is the system we use in our day-to-day activities. It has base 10 because it uses the 10 digits 0-9. The successive positions to the left of the decimal point represent units, tens, hundreds, thousands and so on. For example, in 5372, the 5 is thousands, the 3 is hundreds, the 7 is tens and the 2 is units. Each position represents a specific power of the base (10).

The binary number system is a number expressed in the base 2. It uses only digits 0 and 1 to
represent numbers. The binary system is the language the computer understands (known as
machine language) and is used in modern computers and computer-based devices.

The octal number system has base 8 because it uses the 8 digits 0-7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three (starting from the right).
The hexadecimal number system has base 16 because it uses 16 digits (0-9 and A-F). Hexadecimal numerals can be made from binary numerals by grouping consecutive binary digits into groups of four (starting from the right).

Human beings normally work in decimal and computers in binary. The purpose of the octal and hexadecimal systems is to serve as an aid to human memory, since they are shorter than binary. Because octal and hexadecimal numbers are more compact than binary numbers (one octal digit equals three binary digits and one hexadecimal digit equals four binary digits), they are used in computer texts and core dumps (printouts of part of the computer's memory). The advantage of binary numbers is in encoding electrical signals that switch a device on (logical one) or off (logical zero).
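Python's built-in bin(), oct() and hex() functions display one value in each of the four bases, which makes the compactness of octal and hexadecimal easy to see; a quick sketch:

# One value shown in all four bases we cover.
n = 1629
print(bin(n))   # 0b11001011101  (binary,  base 2)
print(oct(n))   # 0o3135         (octal,   base 8)
print(n)        # 1629           (decimal, base 10)
print(hex(n))   # 0x65d          (hexadecimal, base 16)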

4.0 Summary
In this unit, you have learnt that a number base system is a way of representing numbers in different consistent forms. There are 4 major types of number base systems: decimal, binary, octal, and hexadecimal.

5.0 Self-Assessment
a) List the available number base systems
b) Differentiate between the digits used for each number base
6.0 Tutor Marked Assessment
a) Differentiate between the four number base systems
b) Why is decimal number base preferable for human beings and binary for computers?
7.0 Further Reading
https://www.tutorialspoint.com/computer_fundamentals/computer_number_system.htm

https://en.wikipedia.org/wiki/Binary_number
Unit 2 Number Base Conversion

1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Conversion of Integers from Decimal to Binary
3.2 Conversion of Integers from Decimal to Octal
3.3 Conversion of Integers from Decimal to Hexadecimal
3.4 Conversion of Integers from Other Bases to Decimal
3.5 Conversion from Binary Integer to Octal
3.6 Conversion from Binary Integer to Hexadecimal
3.7 Conversion from Octal to Binary
3.8 Conversion from Hexadecimal to Binary
3.9 Conversion from Hexadecimal to Octal
3.10 Conversion from Octal to Hexadecimal
3.11 Conversion of Binary Fractions to Decimal
3.12 Conversion of Decimal Fractions to Binary
3.13 Conversion of Binary Fractions to Octal/Hexadecimal
3.14 Conversion of Octal/Hexadecimal Fractions to Binary
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading

1.0 Introduction
Number base conversion involves converting a number from a particular base to others. It involves the representation of numbers in different bases. For example, a number in base 10 can be converted to base 2, base 8 or base 16.

2.0 Learning Outcomes


At the end of this unit, you should be able to:
i. Convert from binary to decimal, octal, and hexadecimal
ii. Convert from decimal to binary, octal, and hexadecimal
iii. Convert from octal to decimal, binary, and hexadecimal
iv. Convert from hexadecimal to binary, decimal, and octal
3.0 Main Content
3.1 Conversion of Integers from Decimal to Binary
To convert a decimal integer to binary, divide the number successively by 2, and after each
division record the remainder which is either 1 or 0. The process is terminated only when the
result of the division is 0 remainder 1. The result is read from the most significant bit (the last
remainder) upwards.
For example: to convert 123_10 to binary, we have

Numerator Divisor Quotient Remainder


123 2 61 1
61 2 30 1
30 2 15 0
15 2 7 1
7 2 3 1
3 2 1 1
1 2 0 1
Consequently, 123_10 = 1111011_2
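The repeated-division method can be expressed as a short Python sketch; the function name is ours, for illustration.

# A minimal sketch of the method above: divide by 2 repeatedly, record the
# remainders, then read them from the last back to the first. Assumes n > 0.

def dec_to_bin(n):
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder
        bits.append(str(r))
    return "".join(reversed(bits))  # last remainder is the most significant bit

print(dec_to_bin(123))   # -> 1111011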
3.2 Conversion of Integers from Decimal to Octal
Just as in binary above except that the divisor is 8. The process of conversion ends when the
final result is 0 remainder R (where 0 ≤ R < 8).
For example: to convert 4629_10 to octal, we have

Numerator Divisor Quotient Remainder


4629 8 578 5
578 8 72 2
72 8 9 0
9 8 1 1
1 8 0 1
Therefore, 4629_10 = 11025_8
3.3 Conversion of Integers from Decimal to Hexadecimal
Just as in binary and octal, except that the divisor is 16. The remainder lies in the decimal range 0 to 15, corresponding to the hexadecimal range 0 to F.
For example: to convert 53241_10 to hexadecimal, we have

Numerator Divisor Quotient Remainder


53241 16 3327 9
3327 16 207 15 = F
207 16 12 15 = F
12 16 0 12 = C
Therefore, 53241_10 = CFF9_16
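The same repeated-division method generalizes to any base up to 16 once remainders of 10-15 are written as the letters A-F; a minimal sketch (the helper is ours, for illustration):

# Repeated division generalised to any base up to 16; remainders of 10-15
# become the hexadecimal digits A-F. Assumes n > 0.

DIGITS = "0123456789ABCDEF"

def dec_to_base(n, b):
    out = []
    while n > 0:
        n, r = divmod(n, b)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(dec_to_base(123, 2))     # -> 1111011
print(dec_to_base(4629, 8))    # -> 11025
print(dec_to_base(53241, 16))  # -> CFF9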

3.4 Conversion of Integers from Other Bases to Decimal


Just express in the positional notation earlier stated above. But take note of the base.
For example, to convert 1010111₂, 6437₈, and 1AC₁₆ to decimal:
i. 1010111₂ = 1 × 2^6 + 0 × 2^5 + 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0
= 1 × 64 + 0 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 1 × 1
= 64 + 0 + 16 + 0 + 4 + 2 + 1
= 87₁₀
ii. 6437₈ = 6 × 8^3 + 4 × 8^2 + 3 × 8^1 + 7 × 8^0
= 6 × 512 + 4 × 64 + 3 × 8 + 7 × 1
= 3072 + 256 + 24 + 7
= 3359₁₀
iii. 1AC₁₆ = 1 × 16^2 + 10 × 16^1 + 12 × 16^0
= 1 × 256 + 10 × 16 + 12 × 1
= 256 + 160 + 12
= 428₁₀
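This positional expansion is equally easy to automate. Below is a small Python sketch
(from_base is an illustrative name); note that Python's built-in int(s, base) performs the
same conversion.

def from_base(s, base):
    # Expand the digit string positionally: each step multiplies the
    # running value by the base and adds the next digit's value.
    value = 0
    for ch in s:
        value = value * base + "0123456789ABCDEF".index(ch.upper())
    return value

print(from_base("1010111", 2))  # 87
print(from_base("6437", 8))     # 3359
print(from_base("1AC", 16))     # 428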

3.5 Conversion from Binary Integer to Octal


Form the bits into groups of three, starting at the binary point and moving leftwards. Replace
each group of three bits with the corresponding octal digit (0 to 7).
For example, to convert 11001011101₂ to octal:
11001011101₂ = 11 001 011 101
11 = 1 × 2^1 + 1 × 2^0 = 3; 001 = 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 1; 011 = 0 × 2^2 + 1 × 2^1 + 1 × 2^0 =
3; 101 = 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 5
Therefore 11001011101₂ = 3135₈
3.6 Conversion from Binary Integer to Hexadecimal
The binary number is formed into groups of four bits starting at the binary point. Each group is
replaced by a hexadecimal digit from 0 to 9, A, B, C, D, E, F.
For example, to convert 11001011101₂ to hexadecimal:
11001011101₂ = 110 0101 1101
110 = 1 × 2^2 + 1 × 2^1 + 0 × 2^0 = 6;
0101 = 0 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 5;
1101 = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 = 13 = D
Therefore 11001011101₂ = 65D₁₆
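The grouping method of sections 3.5 and 3.6 can also be sketched in Python. This minimal
example assumes an integer bit string (no binary point); group_convert is an illustrative name.

def group_convert(bits, group_size):
    # Pad on the left so the length is a multiple of the group size,
    # then replace each group of bits by a single octal/hex digit.
    pad = (-len(bits)) % group_size
    bits = "0" * pad + bits
    groups = [bits[i:i + group_size] for i in range(0, len(bits), group_size)]
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)

print(group_convert("11001011101", 3))  # 3135 (octal, groups of three)
print(group_convert("11001011101", 4))  # 65D (hexadecimal, groups of four)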

3.7 Conversion from Octal to Binary


Converting an octal number into its binary equivalent requires the reverse procedure of
converting from binary to octal. Each octal digit is simply replaced by its binary equivalent.
For example, to convert 41357₈ to binary:
41357₈: converting each digit into binary, we have
4 = 100 1 = 001 3 = 011 5 = 101 7 = 111
Replacing each octal digit by its binary equivalent:
41357₈ = 100 001 011 101 111 = 100001011101111₂

3.8 Conversion from Hexadecimal to Binary


Each hexadecimal digit is replaced by its 4-bit binary equivalent. For example, to convert
AB4C₁₆ to binary:
AB4C₁₆: A = 10 = 1010 B = 11 = 1011 4 = 0100 C = 12 = 1100
AB4C₁₆ = 1010 1011 0100 1100 = 1010101101001100₂

3.9 Conversion from Hexadecimal to Octal


Conversion between hexadecimal and octal values is best performed via binary. For example, to
convert 12BC₁₆ to octal:
12BC₁₆ = 1 0010 1011 1100
Regrouping into three bits from the right-hand side:
12BC₁₆ = 1 001 010 111 100
Converting each group into an octal digit:
12BC₁₆ = 11274₈
3.10 Conversion from Octal to Hexadecimal
For example, to convert 41357₈ to hexadecimal:
41357₈: 4 = 100 1 = 001 3 = 011 5 = 101 7 = 111
Regrouping into four bits from the right-hand side:
41357₈ = 100 0010 1110 1111
= 4 2 14 15
= 42EF₁₆
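For quick cross-checks of octal/hexadecimal conversions like the two above, Python's built-in
base parsing and formatting can be used. This route goes through decimal rather than binary,
but the answers agree.

# Octal 41357 -> hexadecimal
n = int("41357", 8)    # parse the octal string (17135 in decimal)
print(format(n, "X"))  # 42EF

# Hexadecimal 12BC -> octal
print(format(int("12BC", 16), "o"))  # 11274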

3.11 Conversion of Binary Fractions to Decimal


Treat the fraction as an integer scaled by an appropriate factor, or simply expand it
positionally using negative powers of two. For example, to convert 0.01101₂ to decimal.
By expressing in standard form, we have
0.01101₂ = 0 × 2^-1 + 1 × 2^-2 + 1 × 2^-3 + 0 × 2^-4 + 1 × 2^-5
= 0 × 1/2 + 1 × 1/4 + 1 × 1/8 + 0 × 1/16 + 1 × 1/32
= 0 + 1/4 + 1/8 + 0 + 1/32
= 8/32 + 4/32 + 1/32 = 13/32 = 0.40625

0.01101₂ = 0.40625₁₀
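A small Python sketch of this expansion (the function name is illustrative):

def bin_fraction_to_decimal(frac_bits):
    # Each bit after the binary point is weighted by 2^-(position).
    return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(frac_bits))

print(bin_fraction_to_decimal("01101"))  # 0.40625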

3.12 Conversion of Decimal Fractions to Binary


To convert a decimal fraction to binary fraction, the decimal fraction is multiplied by two (2) and
the integer part noted. The integer, which is either 1 or 0, is then stripped from the number to
leave a fractional part. The new fraction is multiplied by two (2) and the integer part noted. The
process is carried out repeatedly until it ends or a sufficient degree of precision has been
achieved. The binary fraction is formed by reading the integer parts from the top to the bottom.
For example, to convert 0.6875₁₀ to binary

0.6875 × 2 = 1.3750
0.3750 × 2 = 0.7500
0.7500 × 2 = 1.5000
0.5000 × 2 = 1.0000

0.6875₁₀ = 0.1011₂
We can convert from decimal fractions to octal or hexadecimal fractions by using the same
algorithms used for binary conversions. We only need to change the base (that is: 2, 8, 16).
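The multiply-and-strip procedure, parameterized by base as just described, might be sketched
in Python like this. The names and the max_digits cut-off are illustrative; the cut-off handles
fractions that never terminate in the target base.

def decimal_fraction_to_base(frac, base, max_digits=12):
    digits = []
    while frac and len(digits) < max_digits:
        frac *= base
        whole = int(frac)      # the integer part noted at each step
        digits.append("0123456789ABCDEF"[whole])
        frac -= whole          # strip the integer part, keep the fraction
    return "".join(digits)     # read the integer parts top to bottom

print(decimal_fraction_to_base(0.6875, 2))   # 1011, i.e. 0.1011 in binary
print(decimal_fraction_to_base(0.6875, 16))  # B, i.e. 0.B in hexadecimal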
3.13 Conversion of Binary Fractions to Octal/Hexadecimal
Split the binary digits into groups of three (four for hexadecimal), starting at the binary
point and moving to the right. Any group remaining on the right containing fewer than three
(four for hexadecimal) bits must be made up to three (four for hexadecimal) bits by appending
zeros to the right of the least significant bit.
For example, to convert 0.10101100₂ and 0.10101111₂ to octal:
0.10101100₂ = 0.101 011 00(0)₂ = 0.530₈
0.10101111₂ = 0.101 011 11(0)₂ = 0.536₈
To convert to hexadecimal:
0.10101100₂ = 0.1010 1100₂ = 0.AC₁₆
0.101011001₂ = 0.1010 1100 1(000)₂ = 0.AC8₁₆
3.14 Conversion of Octal/Hexadecimal Fractions to Binary
0.456₈ = 0.100 101 110 = 0.100101110₂
0.ABC₁₆ = 0.1010 1011 1100 = 0.101010111100₂
4.0 Summary
In this unit, you have learnt how to convert numbers, both integers and fractions, from one
base to another.
5.0 Self-Assessment
Convert the following:
11001100₂ = ?₁₀, ?₈, ?₁₆
45678₁₀ = ?₂, ?₈, ?₁₆
6.0 Tutor Marked Assessment
Convert
I. 553.355₁₀ = ?₂, ?₈, and ?₁₆
II. A07₁₆ = ?₁₀, ?₈ and ?₂

7.0 Further Reading
https://code.tutsplus.com/articles/number-systems-an-introduction-to-binary-hexadecimal-and-
more--active-10848
https://www.talentsprint.com/blog/2018/01/number-system-i-what-is-number-syste.html
https://www.varsitytutors.com/hotmath/hotmath_help/topics/number-systems

Module 3 Boolean Algebra and Karnaugh Map


Unit 1 Boolean Algebra, Fundamentals of Truth tables and Precedence
Unit 2 De-Morgan’s Theorem and reducing complex Boolean functions
Unit 3 Karnaugh Map and Minimization of Expressions

Unit 1 Boolean Algebra, Fundamentals of Truth tables and Precedence


1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Algebra
3.2 Polynomials
3.3 Boolean Algebra
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading

Unit 1 Boolean Algebra, Fundamentals of Truth tables and Precedence


1.0 Introduction
This unit covers Boolean algebra and its operators as well as truth tables. The Boolean operators
are the AND operator represented with a dot (.), the OR operator represented with a plus (+) and
the NOT operator, which is an inverter. Boolean functions can be represented with truth tables.
Truth tables can be used in analysing arguments and also in reducing Boolean functions.

2.0 Learning Outcomes


At the end of this unit, you should be able to:
i. differentiate between boolean operators and
ii. reduce boolean functions
3.0 Main Content
3.1 Algebra
Algebra means reunion of broken parts. It is the study of mathematical symbols and the rules for
manipulating those symbols. Algebra can be regarded as elementary, abstract or modern depending
on the level or field of study.

Algebra has computations similar to arithmetic but with letters standing for numbers which
allows proofs of properties that are true regardless of the numbers involved. For example,
quadratic equation: ax2 + bx + c = 0 where a, b, c can be any number (a≠0). Algebra is used in
many studies, for example, elementary algebra, linear algebra, Boolean algebra, and so on.
3.2 Polynomials
A polynomial involves operations of addition, subtraction, multiplication, and non-negative
integer exponents of terms consisting of variables and coefficients. For example, x2 + 2x − 3 is a
polynomial in the single variable x. Polynomial can be rewritten using commutative, associative
and distributive laws.
An important part of algebra is the factorization of polynomials by expressing a given
polynomial as a product of other polynomials that cannot be factored any further. Another
important part of algebra is computation of polynomial greatest common divisors. x2 + 2x − 3 can
be factored as (x − 1)(x + 3).

3.3 Boolean Algebra


Boolean algebra is the branch of algebra in which the values of the variables are the truth
values true and false, denoted by 1 and 0 respectively.

Boolean algebra can be used to describe logic circuits; it is also used to reduce the complexity
of digital circuits by simplifying the logic circuits. Boolean algebra is also referred to as
Boolean logic. It was developed by George Boole in the 1840s and is greatly used in
computations and in computer operations. The name Boolean comes from the name of its
author.

Boolean algebra is a logical calculus of truth values. It somewhat resembles the arithmetic
algebra of real numbers but with a difference in its operators and operations. Boolean operations
involve the set {0, 1}, that is, the numbers 0 and 1. Zero [0] represents “false” or “off” and One
[1] represents “true” or “on”.

1 – True, on
0 – False, off

This has proved useful in programming computer devices, in the selection of actions based on
conditions set.

Basic Boolean operations

1. AND
The AND operator is represented by a period or dot between the two operands, e.g.
X.Y

The Boolean multiplication operator is known as the AND function in the logic domain;
the function evaluates to 1 only if both the independent variables have the value 1.

2. OR
The OR operator is represented by an addition sign. Here the operation + is different from
that defined in normal arithmetic algebra of numbers. E.g. X+Y
The + operator is known as the OR function in the logic domain; the function has a value
of 1 if either or both of the independent variables has the value of 1.
3. NOT
The NOT operator is represented by X' or X̅.
This operator negates whatever value is contained in or assigned to X; it changes the value
to its opposite. For instance, if the value contained in X is 1, X' gives 0 as the
result, and if the value stored in X is 0, X' gives 1 as the result.

To better understand these operations, a truth table is presented below for the result of each
operation on two variables.

Truth Tables

A truth table is a mathematical table used in logic to compute the functional values of
logical expressions on each of their functional arguments. It is used specifically in connection
with Boolean algebra and Boolean functions. Truth tables can be used to tell whether a
propositional expression is logically valid. In a truth table, the output is completely
dependent on the input. It is composed of a column for each input and another column for the
corresponding output. Each row of the truth table therefore contains one possible configuration
of the input variables (for instance, X=true Y=false), and the result of the operation for
those values.

Applications of truth table

1. The truth table can be used in analyzing arguments.


2. It is used to reduce basic Boolean operations in computing
3. It is used to test the validity of statements. In validating statements, the following three
steps can be followed:
a. Represent each premise (represented as inputs) with a symbol (a variable).
b. Represent the conclusion (represented as the final result) with a symbol (a variable).
c. Draw a truth table with columns for each premise (input) and a column for the
conclusion (result).

Truth tables are a means of representing the results of a logic function using a table. They are
constructed by defining all possible combinations of the inputs to a function in the Boolean
algebra, and then calculating the output for each combination in turn. The basic truth table
shows the various operators and the result of their operations involving two variables only.
More complex truth tables can be built from the knowledge of the foundational truth table. The
number of input combinations in a Boolean function is determined by the number of variables in
the function and is computed using the formula:
Number of input combinations = 2^n, where n is the number of variables.

For example, a function with two variables has 2^2 = 4 input combinations; another with
three variables has 2^3 = 8 input combinations, and so on.

AND

X Y X.Y
0 0 0
0 1 0
1 0 0
1 1 1

OR

X Y X+Y
0 0 0
0 1 1
1 0 1
1 1 1

NOT

X X'
0 1
1 0

The NOT operation is a unary operator; it accepts only one input.

Example:
• Draw a truth table for A+BC.
A B C BC A+BC
0 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 1 1 1 1
1 0 0 0 1
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
• Draw a truth table for AB+BC.
A B C AB BC AB+BC
0 0 0 0 0 0
0 0 1 0 0 0
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 1 0 1
1 1 1 1 1 1
• Draw a truth table for A(B+D).
A B D B+D A(B+D)
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 1 0
1 0 0 0 0
1 0 1 1 1
1 1 0 1 1
1 1 1 1 1
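Truth tables like the ones above can also be generated mechanically. The following Python
sketch enumerates all 2^n input combinations with itertools.product; supplying the expression
as a lambda is an assumption of this sketch, not a course requirement.

from itertools import product

def truth_table(expr, variables):
    # Enumerate all 2**n combinations of 0/1 inputs and evaluate the expression.
    print(" ".join(variables), "| out")
    for values in product([0, 1], repeat=len(variables)):
        print(" ".join(str(v) for v in values), "|", int(expr(*values)))

# A + BC: 'or' plays the role of +, 'and' the role of . in the logic domain
truth_table(lambda a, b, c: a or (b and c), "ABC")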

J = f(A,B,C) = AB̅C̅ + A̅B̅C̅
A B C A̅ B̅ C̅ AB̅C̅ A̅B̅C̅ J
0 0 0 1 1 1 0 1 1
0 0 1 1 1 0 0 0 0
0 1 0 1 0 1 0 0 0
0 1 1 1 0 0 0 0 0
1 0 0 0 1 1 1 0 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 0 0 0

4.0 Summary:
In this unit, you have learnt the operators of Boolean algebra, which are the AND operator
represented with a dot (.), the OR operator represented with a plus (+) and the NOT operator,
which is an inverter. The unit also showed how Boolean operators and variables can be
represented using truth tables. Truth tables can be applied to check the validity of statements,
to express arguments, and to reduce complex Boolean functions.

5.0 Self-Assessment:
Solve the following Boolean functions
a. J = f(A,B,C) = A̅B + BC̅ + BC + AB̅C̅
b. Z = f(A,B,C) = B + B + BC + A

6.0 Tutor Marked Assessment


a) Draw a truth table for (A+B)(A+C).
b) Draw a truth table for W(X+Y)Z.

7.0 Further Reading


https://en.wikipedia.org/wiki/Algebra
https://en.wikipedia.org/wiki/Polynomial#Etymology
http://www.mee.tcd.ie/~bfoley/2e6/digital/L3-2E6-Digital.docx
8.0 References
Gupta A, Arora S. Industrial automation and robotics. Laxmi Publications; 2009.
https://books.google.com.ng/books?
hl=en&lr=&id=Y7rgCP7iC18C&oi=fnd&pg=PA1&dq=Industrial+Automation+and+Robotics+a.
+K+Gupta+s.K.+Arora&ots=e4KP0Fl_g9&sig=5FeHKe3utUmUlfjaTLFQf-
RbkMY&redir_esc=y#v=onepage&q=Industrial%20Automation%20and%20Robotics%20a.%20K
%20Gupta%20s.K.%20Arora&f=false

Wassell I. J. Digital Electronics: Part I – Combinational and Sequential Logic


https://www.cl.cam.ac.uk › teaching › DigElec › Digital_Electronics_08_pdf
Unit 2 De-Morgan’s Theorem and Precedence
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Boolean Definition
3.2 Fundamental importance of Boolean algebra
3.3 Boolean Algebra
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading

1.0 Introduction
De Morgan's theorem is used to simplify Boolean algebra expressions. It states that
the complement of the product of two or more variables is equal to the sum of the complements
of the variables, and vice versa. With De Morgan's theorem, Boolean expressions can be simplified.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
i. explain the axiomatic relationships using truth tables;
ii. state the order of precedence of a boolean expression; and
iii. list the fundamental importance of boolean algebra

3.0 Main Content


3.1 Boolean Definition

Based on the Boolean definitions, the following axiomatic relationships hold:

A1 Closure: (a) a + b is in {0, 1} (b) a.b is in {0, 1}

A2 Identity: (a) a + 0 = a (b) a.1 = a

A3 Commutation: (a) a + b = b + a (b) a.b = b.a

A4 Distribution: (a) a.(b + c) = (a.b) + (a.c) (b) a + (b.c) = (a + b).(a + c)

A5 Inverse: (a) a + a̅ = 1 (b) a.a̅ = 0

To check that the axioms conform to the definitions: Property A1 is obvious, while the following
truth table verifies A2:
a a+0 a.1
0 0+0=0 0.1=0
1 1+0=1 1.1=1

A3 may also be verified by truth table:

a b a+b b+a a.b b.a


0 0 0+0=0 0+0=0 0.0=0 0.0=0
0 1 0+1=1 1+0=1 0.1=0 1.0=0
1 0 1+0=1 0+1=1 1.0=0 0.1=0
1 1 1+1=1 1+1=1 1.1=1 1.1=1

We next consider A4 (a)

a b c b+c a.(b+c) a.b a.c (a.b)+(a.c)


0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0
0 1 0 1 0 0 0 0
0 1 1 1 0 0 0 0
1 0 0 0 0 0 0 0
1 0 1 1 1 0 1 1
1 1 0 1 1 1 0 1
1 1 1 1 1 1 1 1

Axiom A4(b) may be similarly verified

a b c bc a+(bc) a+b a+c (a+b)(a+c)


0 0 0 0 0 0 0 0
0 0 1 0 0 0 1 0
0 1 0 0 0 1 0 0
0 1 1 1 1 1 1 1
1 0 0 0 1 1 1 1
1 0 1 0 1 1 1 1
1 1 0 0 1 1 1 1
1 1 1 1 1 1 1 1
Finally, the A5 relations are verified from

a a̅ a+a̅ a.a̅
0 1 1 0
1 0 1 0

An important feature of Boolean algebra is duality. The set of (b) axioms are said to be duals of
the (a) axioms, and vice versa, in that a (b) axiom can be formed from its (a) counterpart by
exchanging operators and identity elements ‘+’ to ‘.’ and ‘1’ to ‘0’. Thus for every theorem
derived from one particular set of axioms, one can construct a dual theorem based on the
corresponding set of dual axioms.

Any more complex functionality can be constructed from the three basic Boolean operators
(AND, OR, and NOT) by using De Morgan's Law:
I. The complement of a product is equal to the sum of the complements: (A.B)' = A' + B'

II. The complement of a sum is equal to the product of the complements: (A+B)' = A'.B'
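Both laws are easy to confirm exhaustively, since there are only four input combinations.
A minimal Python check:

from itertools import product

for a, b in product([0, 1], repeat=2):
    assert int(not (a and b)) == int((not a) or (not b))  # (A.B)' = A' + B'
    assert int(not (a or b)) == int((not a) and (not b))  # (A+B)' = A'.B'
print("De Morgan's laws hold for all input combinations")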

Precedence
Order of precedence also exists in Boolean algebra as it does in other areas of mathematics.
This order should be followed in Boolean computations. The Boolean operators defined above have
the order of precedence defined here:

NOT operations have the highest precedence, followed by AND operations, followed by OR
operations.

Also, brackets can be used. Example : X.(Y + Z), X+(Y+Z), X.Y+(X+Y)

X.Y + Z and X.(Y + Z) are not the same function.

The brackets should be evaluated first to reduce the complexity of the Boolean operation.

Boolean operations are foundational tools used in building computers and electronic devices.

Consider this practical application of a Boolean operation:

“I will take an umbrella with me if it is raining or the weather forecast is bad”.

This statement functionally has two propositions which are:

a. It is raining; and
b. The weather forecast is bad.
Let “It is raining” be variable X , “The weather forecast is bad” be Y and the result (taking an
umbrella) be Z.

We can generate truth values in a truth table from this problem statement.

From the statement, if either of the conditions is true, an umbrella would be taken.

In functional terms we can consider the truth value of the umbrella proposition as the output
or result of the truth values of the other two.

X (Raining) Y (Bad weather) Z (Take umbrella)
False False False
False True True
True False True
True True True

Another practical application of Boolean logic is this:

“I will sweep the class only if the windows are opened and the class is empty”.

From this statement, we can get two propositions which are “Windows opened” and “Class
empty”. These two propositions are the variables X and Y respectively.

“Windows opened” – X

“Class empty” – Y

X (Windows opened) Y (Class empty) Z (Sweep)
False False False
False True False
True False False
True True True

3.2 Fundamental importance of Boolean algebra

1. Boolean logic forms the basis for computation in modern binary computer systems.
2. They are used in the development of software, in selective control structures (if and
if...else statements).
3. They are used in building electronic circuits. For any Boolean function you can
design an electronic circuit and vice versa.
4. A computer’s CPU is built up from various combinatorial circuits. A combinatorial
circuit is a system containing basic Boolean operations (AND, OR, NOT), some inputs,
and a set of outputs.

We consider the Boolean function in two variables A and B:

Z = f(A,B) = A̅B + AB̅

A B A̅ B̅ A̅B AB̅ Z
0 0 1 1 0 0 0
0 1 1 0 1 0 1
1 0 0 1 0 1 1
1 1 0 0 0 0 0

Reducing complex Boolean functions

Very complex Boolean functions may result and this can be simplified in two ways:

1. The Boolean algebra method


2. Karnaugh map (discussed in the next unit)

Boolean algebra method

The basic rules of Boolean algebra are logical in nature. These rules are followed in simplifying
any Boolean function. As stated in the axioms above, the rules are:

1. A+0=A
2. A+1=1
3. A.0=0
4. A.1=A
5. A+A=A
6. A + A̅ = 1
7. A.A=A
8. A.A̅ = 0
9. (A')' = A (double bar)
10. A+AB= A(1+B)=A(1)=A
11.  A (B + C) = A B + A C
12. A + (B C) = (A + B) (A + C)

De-Morgan's theorem

(A.B)' = A' + B'
(A+B)' = A'.B'
 For example:

1. Using the above laws, simplify the following expression:

Q = (A + B)(A + C)

= AA + AC + AB + BC (law 11)
= A + AC + AB + BC (law 7: AA = A)
= A(1 + C) + AB + BC (law 2: 1 + C = 1)
= A + AB + BC
= A(1 + B) + BC (1 + B = 1)
= A.1 + BC (A.1 = A)
Q = A + BC
This implies that the expression (A + B)(A + C) can be simplified to A + BC.
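Since a simplification claims equality of two functions, it can be verified by comparing both
sides on every input combination. A short Python check of the result above:

from itertools import product

lhs = lambda a, b, c: (a or b) and (a or c)  # (A + B)(A + C)
rhs = lambda a, b, c: a or (b and c)         # A + BC
assert all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=3))
print("(A + B)(A + C) equals A + BC on all eight assignments")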

2. Another example can be considered:

Simplify Z = (A + B̅ + C̅)(A + B̅C)

Z = AA + AB̅C + AB̅ + B̅B̅C + AC̅ + B̅CC̅
Z = A + AB̅C + AB̅ + AC̅ + B̅C + 0
Z = A(1 + B̅C + B̅ + C̅) + B̅C
Z = A(1 + B̅(C + 1) + C̅) + B̅C
Z = A(1 + B̅ + C̅) + B̅C
Z = A(1) + B̅C
Z = A + B̅C

4.0 Summary
De Morgan's theorem states that the complement of the product of two or more variables is
equal to the sum of the complements of the variables. This unit covered the axiomatic
relationships, De Morgan's theorem, truth tables and operator precedence.
5.0 Self-Assessment
1. List and explain the axiomatic relationships using truth tables
2. State the order of precedence available in any boolean expression
3. List 3 fundamental importance of boolean algebra
6.0 Tutor Marked Assessment
I. Using Boolean algebra techniques, simplify this expression:

7.0 Further Reading


https://www.allaboutcircuits.com/textbook/digital/chpt-7/demorgans-theorems/
https://www.daenotes.com/electronics/digital-electronics/de-morgan-theorems
8.0 References
Gupta A, Arora S. Industrial automation and robotics. Laxmi Publications; 2009.
https://books.google.com.ng/books?
hl=en&lr=&id=Y7rgCP7iC18C&oi=fnd&pg=PA1&dq=Industrial+Automation+and+Robotics+a.
+K+Gupta+s.K.+Arora&ots=e4KP0Fl_g9&sig=5FeHKe3utUmUlfjaTLFQf-
RbkMY&redir_esc=y#v=onepage&q=Industrial%20Automation%20and%20Robotics%20a.%20K
%20Gupta%20s.K.%20Arora&f=false

Wassell I. J. Digital Electronics: Part I – Combinational and Sequential Logic


https://www.cl.cam.ac.uk › teaching › DigElec › Digital_Electronics_08_pdf
Unit 3 Karnaugh Map and Minimization of Expressions
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Karnaugh Map
3.2 Grouping of K-Map
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 References
8.0 Further Reading

1.0 Introduction
The Karnaugh map, popularly called the k-map, is a diagram with a rectangular array of squares,
each representing a different combination of the variables of a Boolean function. It is a way of
reducing Boolean expression complexity. This unit covers the Karnaugh map, with descriptions of
how grouping is done, labelling, and Boolean function reduction.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
i. define k-map;
ii. list the k-map grouping rules; and
iii. reduce Boolean expressions using a k-map

3.0 Main Content


3.1 Karnaugh Map
A Boolean expression can be reduced to its simplest form through some steps involved in
Karnaugh mapping. A Karnaugh map is a graphical method of Boolean logic expression
reduction.

At this point you have the capability to apply the theorems and laws of Boolean algebra to
simplify logic expressions to produce simpler Boolean functions. Simplifying a logic expression
using Boolean algebra, though not terribly complicated, is not always the most straightforward
process. There isn’t always a clear starting point for applying the various theorems and laws, nor
is there a definitive end in the process. The Karnaugh map is also known as Veitch diagram (K-
map for short). It is a tool to facilitate the simplification of Boolean algebra integrated circuit
expressions. The Karnaugh map reduces the need for extensive calculations by taking advantage
of human pattern-recognition.

The Karnaugh map was originally invented in 1952 by Edward W. Veitch. It was further
developed in 1953 by Maurice Karnaugh, a physicist at Bell Labs, to help simplify digital
electronic circuits. In a Karnaugh map the Boolean variables are transferred (generally from a
truth table) and ordered according to the principles of Gray code, in which only one variable
changes between adjacent squares. Once the table is generated and the output possibilities are
transcribed, the data is arranged into the largest possible groups (of sizes that are powers of
two) and the minterms are generated. The K-map is a more straightforward process of reduction.
In the reduction process using a k-map, 0 represents the complement of the variable (e.g. B̅)
and 1 represents the variable itself (e.g. B).

3.2 Grouping in K-Map


The truth table for the Boolean function should be drawn with the result or output, and the
Karnaugh map drawn with the number of boxes equal to the number of outputs = number of
input combinations = 2^n,
where n is the number of variables in the function.
In grouping in the K-map, we get the simplified sum-of-products logic expression.

The rules to be followed are:

1. Consider boxes with ones only. Boxes containing zeros would not be considered.
2. Group 1s in powers of 2. That is 2, 4, 8... ones.
3. Grouping can only be done side to side or top to bottom, not diagonally.
4. Using the same one in more than one group is permissible.
5. The target is to find the fewest number of groups.
6. The top row may wrap around to the bottom row to form a group.
0 1 1 0
1 0 0 1
1 0 0 1
0 1 1 0

Labelling a K-map

In labelling the Karnaugh map, we make use of the principle of the “gray code”.
Labelling a 2-input k-map

A\B 0 1
0 A̅B̅ (00) A̅B (01)
1 AB̅ (10) AB (11)

For the 2-input k-map, the values change from 0 to 1 along both axes.

Labelling a 3-input k-map

AB\C 0 1

00 A̅B̅C̅ (000) A̅B̅C (001)
01 A̅BC̅ (010) A̅BC (011)
11 ABC̅ (110) ABC (111)
10 AB̅C̅ (100) AB̅C (101)

In the case of the 3-input k-map, we have A and B on one side of the map and C on the other side
of the map. Using Gray code, we start with A̅B̅ (00); keeping A constant and changing B, we
have A̅B (01). If we again keep A constant and change B, we return to (00), which
already exists in the map, so the next thing to do is to keep B constant and change A. With
this, we have AB (11) next and then AB̅ (10).

For minimization using the k-map, the value 0 in the truth table, corresponding to a variable,
is taken as its complement. For instance, if the variable A has the value 0 in the truth table,
it is taken as A̅ to fill in the k-map.
It is important to note that in a k-map labelled with Gray code, only one bit changes between
adjacent squares. For example, from the 4-bit label 0000, only 1 bit may change at a time.
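The Gray code ordering used for k-map labels can be generated by the classic reflect-and-prefix
construction. A small Python sketch (gray_code is an illustrative name):

def gray_code(n_bits):
    # n-bit Gray code: consecutive codes differ in exactly one bit,
    # which is the property k-map labelling depends on.
    if n_bits == 1:
        return ["0", "1"]
    prev = gray_code(n_bits - 1)
    # prefix the previous list with 0, and its reflection with 1
    return ["0" + c for c in prev] + ["1" + c for c in reversed(prev)]

print(gray_code(2))  # ['00', '01', '11', '10'] - the k-map row/column order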
Consider the k-map:
0 1
1 0

The diagonal 1s cannot be grouped together.


A\B B̅ B
A̅ 1 1
A 1 0

There are two possible groupings here.

Example 1: Given Z = AB + AB̅

A B A̅ B̅ AB AB̅ Z = AB + AB̅
0 0 1 1 0 0 0
0 1 1 0 0 0 0
1 0 0 1 0 1 1
1 1 0 0 1 0 1

Put the output Z into a k-map:

0 0

1 1

In a k-map, the variable that remains constant across the group is retained. Since the variable
B varies in value across the group (looking at the B̅ and B columns, the variable changes) and A
remains constant, the constant value across the group is A. The A̅ row is not used, even though
it is constant, because its entries are 0.

Z = AB + AB̅
Z = A
Example 2

J = f(A,B,C) = AB̅C̅ + A̅B̅C̅

A B C A̅ B̅ C̅ AB̅C̅ A̅B̅C̅ J

0 0 0 1 1 1 0 1 1
0 0 1 1 1 0 0 0 0
0 1 0 1 0 1 0 0 0
0 1 1 1 0 0 0 0 0
1 0 0 0 1 1 1 0 1
1 0 1 0 1 0 0 0 0
1 1 0 0 0 1 0 0 0
1 1 1 0 0 0 0 0 0

B̅C̅ B̅C BC BC̅
A̅ 1 0 0 0
A 1 0 0 0

From the diagram, the value of A changes across the group and the value of B̅C̅ remains the
same.

J = AB̅C̅+A̅B̅C̅ = B̅C̅

An advantage of the k-map over the Boolean algebra method of function reduction is that the
k-map has a definite process to follow, unlike the Boolean algebra method, which may not have a
particular starting or ending point.
Example 3

Consider the already filled k-map below,

AB\CD C̅D̅ C̅D CD CD̅

A̅B̅ 0 0 0 1
A̅B 1 1 0 1
AB 1 1 0 1
AB̅ 0 0 0 1

(group 1 is the block of four 1s in the C̅D̅ and C̅D columns, rows A̅B and AB; group 2 is the CD̅ column)

The final answer here after the grouping is derived by looking across each group and eliminating
the variable that changes in value.

For group 1, looking horizontally, D changes in value while C has a constant value of C̅. So, D
is eliminated and C̅ retained. Looking vertically, A changes across the group while B remains
constant. So, A is eliminated and B retained.

For group 1, the answer is the AND of the retained variables after elimination, and this is BC̅.

We do the same for group 2. Our answer there is CD̅, since vertically across the group both A
and B change values.

After doing this for all the groups in the k-map, we then OR the individual results of each group.

So, for this k-map, we have BC̅ + CD̅ as the minimum expression.

4.0 Summary
This unit covered Boolean expression reduction using the Karnaugh map, which is a diagram with
a rectangular array of squares, each representing a different combination of the variables of a
Boolean function.

5.0 Self-Assessment
a) Reduce J= f(A,B,C) = A̅B + BC̅ + BC + AB̅C̅ using a k-map
b) Simplify the following Boolean functions:
ABC + ABC + ABC + ABC + ABC
A + B+
c) Consider a Boolean function represented by the truth table below and simplify the
expression using k-map
A B C F
0 0 0 1
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 0
1 0 1 1
1 1 0 0
1 1 1 0

6.0 Tutor Marked Assessment


A) Consider the truth table given below and simplify the expression using k-map
A B C F
0 0 0 1
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 0
1 1 1 0

7.0 Further Reading


https://www.allaboutcircuits.com/worksheets/boolean-algebra/
https://studylib.net/doc/6803447/2.2.1.a-k
https://www.allaboutcircuits.com/textbook/digital/chpt-8/karnaugh-maps-truth-tables-boolean-
expressions/
https://web.iit.edu/sites/web/files/departments/academic-affairs/academic-resource-center/pdfs/
kmaps.pdf
8.0 References
Gupta A, Arora S. Industrial automation and robotics. Laxmi Publications; 2009.
https://books.google.com.ng/books?
hl=en&lr=&id=Y7rgCP7iC18C&oi=fnd&pg=PA1&dq=Industrial+Automation+and+Robotics+a.
+K+Gupta+s.K.+Arora&ots=e4KP0Fl_g9&sig=5FeHKe3utUmUlfjaTLFQf-
RbkMY&redir_esc=y#v=onepage&q=Industrial%20Automation%20and%20Robotics%20a.%20K
%20Gupta%20s.K.%20Arora&f=false

Wassell I. J. Digital Electronics: Part I – Combinational and Sequential Logic


https://www.cl.cam.ac.uk › teaching › DigElec › Digital_Electronics_08_pdf
Module 4 Logic Gates

Unit 1 Basic Logic Gates

Unit 2 Combinatorial Logic Circuits

Unit 1 Basic Logic Gates Boolean

Unit 1 Basic Logic Gates

1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Basic Logic Gates
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 Introduction
Logic gates are the building blocks of digital circuits. Electronic circuits are built using the
various types of gates. The basic gates are the AND, OR, and NOT gates. Other, derived gates
are developed using combinations of the basic gates: NAND, NOR, XOR, and XNOR.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
I. List at least five (5) types of gates
II. Mention the logic function associated with each gate
III. Draw the truth table associated with each gate
3.0 Main Content
3.1 Basic Logic Gates

Logic gates can be viewed as black boxes with binary inputs (independent variables) and binary
outputs (dependent variables). Logic also refers to both the study of modes of reasoning and
the use of valid reasoning. In the latter sense, logic is used in most intellectual activities.
Logic in computer science has emerged as a discipline and has been extensively applied in the
fields of Artificial Intelligence and Computer Science, and these fields provide a rich source
of problems in formal and informal logic.

Boolean logic is considered a fundamental part of computer hardware, particularly the system's
arithmetic and logic structures, and relates to the operators AND, NOT, and OR.
Logic gates

A logic gate is an elementary building block of a digital circuit. Complex electronic circuits are
built using the basic logic gates. At any given moment, every terminal of the logic gate is in one
of the two binary conditions low (0) or high (1), represented by different voltage levels.

There are 3 basic logic gates: AND, OR, NOT.

Other gates- NAND, NOR, XOR and XNOR are based on the 3 basic gates.

The AND gate

The AND gate is so called because, if 0 is called "false" and 1 is called "true," the gate acts in the
same way as the logical "and" operator. The following illustration and table show the circuit
symbol and logic combinations for an AND gate.

The output is "true" when both inputs are "true." Otherwise, the output is "false."
 

The OR gate
The OR gate gets its name from the fact that it behaves in the manner of the logical "or." The
output is "true" if either or both of the inputs are "true." If both inputs are "false," then
the output is "false."

The NOT gate

A logical inverter, sometimes called a NOT gate to differentiate it from other types of
electronic inverter devices, has only one input. It reverses the logic state of its input.

As previously considered, the AND, OR and NOT gates’ actions correspond with the AND, OR
and NOT operators.
More complex functions can be constructed from the three basic gates by using DeMorgan’s
Law.

The NAND gate

The NAND gate operates as an AND gate followed by a NOT gate. It acts in the manner of the
logical operation "and" followed by negation. The output is "false" if both inputs are "true."
Otherwise, the output is "true". It finds the AND of two values and then finds the opposite of the
resulting value.

The NOR gate

The NOR gate is a combination of an OR gate followed by an inverter. Its output is "true" if both
inputs are "false." Otherwise, the output is "false". It finds the OR of two values and then finds
the complement of the resulting value.
The XOR gate

The XOR (exclusive-OR) gate acts in the same way as the logical "either/or." The output is


"true" if either, but not both, of the inputs are "true." The output is "false" if both inputs are
"false" or if both inputs are "true." Another way of looking at this circuit is to observe that the
output is 1 if the inputs are different, but 0 if the inputs are the same.

Z = A̅B + AB̅

XOR gate

A B Z
0 0 0
0 1 1
1 0 1
1 1 0

The XNOR gate


The XNOR (exclusive-NOR) gate is a combination of an XOR gate followed by an inverter. Its
output is "true" if the inputs are the same, and"false" if the inputs are different. It performs the
operation of an XOR gate and then inverts the resulting value.

Z = AB + A̅B̅

XNOR gate

A B Z
0 0 1
0 1 0
1 0 0
1 1 1
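All seven gates can be modelled as tiny functions on the values 0 and 1, with the derived gates
built from the three basic ones, mirroring the definitions above. A minimal Python sketch:

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

# Derived gates, built only from the basic three
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b): return NOT(OR(a, b))
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))
def XNOR(a, b): return NOT(XOR(a, b))

print("A B NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, NAND(a, b), NOR(a, b), XOR(a, b), XNOR(a, b))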

4.0 Summary
This unit covered logical reasoning and logic gates, as well as the logic function and truth
table for each of the gates. The basic gates are the AND, OR and NOT gates; the others are the
NOR, NAND, XOR, and XNOR gates.

5.0 Self-Assessment
a. List at least five (5) types of gates
b. Mention the logic function associated with each gate
c. Draw the truth table associated with each gate
6.0 Tutor Marked Assessment

a. Draw the physical representation of the AND, OR, NOT and XNOR logic gates.

b. Draw the logic circuit and truth table for

I. Z= ABC,

II. W= (P.Q̅) (R+S̅)

7.0 Further Reading

https://whatis.techtarget.com/definition/logic-gate-AND-OR-XOR-NOT-NAND-NOR-and-
XNOR
https://www.electronics-tutorials.ws/logic/logic_1.html
http://www.ee.surrey.ac.uk/Projects/CAL/digital-logic/gatesfunc/

8.0 References
Gupta A, Arora S. Industrial automation and robotics. Laxmi Publications; 2009.
https://books.google.com.ng/books?
hl=en&lr=&id=Y7rgCP7iC18C&oi=fnd&pg=PA1&dq=Industrial+Automation+and+Robotics+a.
+K+Gupta+s.K.+Arora&ots=e4KP0Fl_g9&sig=5FeHKe3utUmUlfjaTLFQf-
RbkMY&redir_esc=y#v=onepage&q=Industrial%20Automation%20and%20Robotics%20a.%20K
%20Gupta%20s.K.%20Arora&f=false

Wassell I. J. Digital Electronics: Part I – Combinational and Sequential Logic


https://www.cl.cam.ac.uk › teaching › DigElec › Digital_Electronics_08_pdf
Unit 2 Combinatorial Logic Circuits

1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Combinatorial Logic Circuit
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 Introduction
Different gates can be combined to build digital circuits. As learnt in the previous module,
algebraic functions can be reduced using a k-map or Boolean reduction. The reduced logic
translates to a reduction in the cost of building a circuit.
2.0 Learning Outcomes
At the end of this unit, you should be able to
i. Combine different gates to form a logic circuit.
ii. Draw the associated truth table for the logic circuit
3.0 Main Content
3.1 Combinatorial Logic Circuits

With the combinations of several logic gates, complex operations can be performed by electronic
devices. Arrays (arrangement) of logic gates are found in digital integrated circuits (ICs).

As IC technology advances, the required physical volume for each individual logic gate
decreases and digital devices of the same or smaller size become capable of performing much-
more-complicated operations at an increased speed.

Combination of gates

A B C A̅ A̅BC

0 0 0 1 0
0 0 1 1 0

0 1 0 1 0

0 1 1 1 1

1 0 0 0 0

1 0 1 0 0

1 1 0 0 0

1 1 1 0 0

A goes into the NOT gate and is inverted; after this, it goes into the AND gate along with the
variables B and C. The final output at the output terminal of the AND gate is A̅BC. More
complex circuitry can be developed using the symbolic representation in this same manner.

Q = A̅B̅ + BC

A B C D = (A+B)' E = B.C Q

0 0 0 1 0 1
0 0 1 1 0 1
0 1 0 0 0 0
0 1 1 0 1 1
1 0 0 0 0 0
1 0 1 0 0 0
1 1 0 0 0 0
1 1 1 0 1 1
Basically there are 3 variables A, B, and C; do not be confused by the presence of D and E,
which are the intermediate outputs. Variables A and B go into a NOR gate, and B goes into an
AND gate along with variable C. The B is reused from the earlier defined one so as not to waste
resources or have repetition. The outputs of the NOR and AND gates serve as inputs to the OR gate.

Q = A̅B̅ + BC

Q= (ABC)(DE)
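The NOR/AND/OR circuit above can be simulated directly by wiring small gate functions together;
the truth table falls out by enumerating the inputs. A minimal Python sketch of that circuit:

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b
def NOR(a, b): return NOT(OR(a, b))

def circuit(a, b, c):
    d = NOR(a, b)    # D = (A+B)'
    e = AND(b, c)    # E = B.C
    return OR(d, e)  # Q = A'B' + BC

print("A B C Q")
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, circuit(a, b, c))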

4.0 Summary
In this unit, you have learnt how to combine different gates to form logic circuits.
5.0 Self-Assessment
i. Combine gates together to draw 4 logic circuits, combining at least 3 gates together in
each.
ii. Draw the logic gate and associated logic circuits for the following functions
A X = A̅BC̅D + FG
B Z= ABC + CDE + ACF
6.0 Tutor Marked Assessment
Write out the logic function of the gates below:
i)

ii)

7.0 Further Reading

https://whatis.techtarget.com/definition/logic-gate-AND-OR-XOR-NOT-NAND-NOR-and-
XNOR
https://www.electronics-tutorials.ws/logic/logic_1.html
http://www.ee.surrey.ac.uk/Projects/CAL/digital-logic/gatesfunc/
8.0 References
Gupta A, Arora S. Industrial automation and robotics. Laxmi Publications; 2009.
https://books.google.com.ng/books?
hl=en&lr=&id=Y7rgCP7iC18C&oi=fnd&pg=PA1&dq=Industrial+Automation+and+Robotics+a.
+K+Gupta+s.K.+Arora&ots=e4KP0Fl_g9&sig=5FeHKe3utUmUlfjaTLFQf-
RbkMY&redir_esc=y#v=onepage&q=Industrial%20Automation%20and%20Robotics%20a.%20K
%20Gupta%20s.K.%20Arora&f=false

Wassell I. J. Digital Electronics: Part I – Combinational and Sequential Logic


https://www.cl.cam.ac.uk › teaching › DigElec › Digital_Electronics_08_pdf
Module 5 Computer Programming Languages
Unit 1 Program and Evolution of Languages
Unit 2 Classification and Generations of Programming Languages

Unit 1 Program and Evolution of Languages


1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Program
3.2 Evolution of Programming Languages
3.3 Features of Good Programming Languages
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 Introduction
A program in computing can be regarded as a set of instructions. These instructions are used in
executing a given task. Programming languages are the medium through which human beings
communicate with the computer; different languages have evolved over the years and each has its
own target and features. The features of a language are used to measure its strength and how
well it will be accepted by the public.

2.0 Learning Outcomes


At the end of this unit, you should be able to:
I. List ten (10) different programming languages and their authors
II. State the features of each programming language
III. State the features of a good programming language
3.0 Main Content
3.1 Program

A program is a list of instructions in a logical sequence which are needed to be performed in


order to accomplish a given task or solve a given problem on a computer. The process by which
a user specifies to the computer in a particular programming language what s/he wants the
computer to do is referred to as programming. Since the computer cannot think on its own, it is
the programmer that will give the detailed steps, as well as the sequence in which steps are to
be taken, in solving the problem.
Programming Language

Programming Language is a set of specialized notations for communicating with the computer
system.

3.2 Evolution of Programming Languages

Hundreds of programming languages have been developed in the last fifty years. Many of them
remained in the labs and the ones, which have good and more general features, got recognized.
Every language that is introduced comes with features upon which its success is judged. In the
initial years, languages were developed for specific purposes, which limited their scope.
However, as the computer revolution spread affecting common man, the language needed to be
molded to suit all kinds of applications. Every new language inherited certain features from
existing languages and added its own features. The chronology of developments in programming
languages is given below:-

I. The first computer program was written by Lady Ada Augusta Lovelace in 1843 for an
application of the Analytical Engine.

II. Konrad Zuse, a German, started a language design project in 1943. He finally developed
Plankalkül (programming calculus) in 1945. The language supported bit, integer and floating-
point scalar data, arrays, and record data structures.

III. In the early 1950s, Grace Hopper and her team developed the A-0 language. During this
period, assembly language was introduced.

IV. A major milestone was achieved when John Backus developed FORTRAN (Formula Translator) in
1957. FORTRAN is oriented around numerical calculations. It was a major step towards the
development of a full-fledged programming language, including control structures, conditional
loops, and input and output statements.

V. ALGOL was developed by GAMM (German Society of Applied Mathematics) and the ACM
(Association for Computing Machinery) in 1960.

VI. COBOL (Common Business Oriented Language) was developed for business purposes under the US
Department of Defense in 1960.

VII. BASIC (Beginner's All-purpose Symbolic Instruction Code) was developed by John Kemeny and
Thomas Kurtz in the 1960s.

VIII. PASCAL was developed by Niklaus Wirth around 1970. PASCAL was named after the French
philosopher and mathematician Blaise Pascal.
IX. In the early 70s, Dennis Ritchie developed C at Bell Laboratories using some of the B
language's features.

X. C++ was developed by Bjarne Stroustrup in the early 1980s, extending the features of C and
introducing object-oriented features.

XI. Java, originally called Oak, was developed by Sun Microsystems of USA in 1991 as a general-
purpose language. Java was designed for the development of software for consumer electronic
devices. It was a simple, reliable, portable and powerful language.

A language may be extremely useful for one type of application. For example, a language such as
COBOL is useful for business applications but not for embedded software. On the basis of
application, programming languages can be broadly classified as:

Business: COBOL
Scientific: FORTRAN
Internet: JAVA
System: C, C++
Artificial intelligence (AI): LISP and PROLOG

3.3 Features of Good Programming Languages

The features of one programming language may differ from those of another. One can be easy and
simple while another can be difficult and complex. The program written for a specific task may
have few lines in one language but many lines in another. The success and strength of a
programming language is judged with respect to standard features. To begin the language
selection process, it is important to establish some criteria that make a language good. A good
language choice should provide a path into the future in a number of important ways.

(a) Ease of use:- this is the most important factor in choosing a language. The language should
be easy in writing codes for the programs and executing them. The ease and clarity of a language
depends upon its syntax. It should be capable enough to provide clear, simple, and unified set of
concepts. The vocabulary of the language should resemble English (or some other natural
language). Any concept that cannot easily be explained to amateurs should not be included in the
language. Part-time programmers do not want to struggle with difficult concepts; they just want
to get a job done quickly and easily.

(b) Portability:- the language should support the construction of code in a way that it could be
distributed across multiple platforms (operating systems). Computer languages should be
independent of any particular hardware or operating systems, that is, programs written on one
platform should be able to be tested or transferred to any other computer or platform and there it
should perform accurately.
(c) Reliability:- the language should support construction of components that can be expected to
perform their intended functions in a satisfactory manner throughout its lifetime. Reliability is
concerned with making a system failure free, and thus is concerned with all possible errors. The
language should have the support of error detection as well as prevention. It should make some
kinds of errors impossible for example, some errors can be prevented by a strict syntax checking.
Apart from prevention, the language should also be able to detect and report errors in the
program. For example errors such as arithmetic overflow and assertions should be detected
properly and reported to the programmers immediately so that the error can be rectified. The
language should provide reliability by supporting explicit mechanism for dealing with problems
that are detected when the system is in operation.

(d) Safety:- safety is concerned with the extent to which the language supports the construction
of safety critical systems, yielding systems that are fault tolerant, fail-safe or robust in the face of
systemic failures. The system must always do what is expected and be able to recover from any
situation that might lead to a mishap or actual system hazard. Thus, safety tries to ensure
that any failures that occur result in minor consequences, and even potentially dangerous
failures are handled in a fail-safe fashion. A language can facilitate this through such
features as built-in consistency checking and exception handling.

(e) Performance: In some applications, performance is a big issue. By performance, we mean


that the language should not only be capable of interacting with the end users, but also with the
hardware. The language should also support software engineering mechanism, discouraging or
prohibiting poor practices and supporting maintenance activities. This is the main reason why C
language is used for developing operating systems.

(f) Cost: Cost component is a primary concern before deploying a language at a commercial
level. It includes several costs such as; program execution and translation cost, program
creation, testing and use, program maintenance

(g) Compact Code: A good language should also promote compact coding, that is, the intended
operations should be coded in a minimum number of lines. Even if the language is powerful,
and is not able to perform the task in small amount of codes, then it is bound to be unpopular.
This is the main reason of C language’s popularity over other languages in developing complex
applications. Larger codes require more testing and developing time, thereby increasing the cost
of developing an application.

(h) Maintainability: creating an application is not the end of the system development. It
should be maintained regularly so that it can be modified to satisfy new requirement or to correct
deficiencies. Maintainability is actually facilitated by most of the languages, which makes it
easier to understand and then change the software. Maintainability is closely linked with the
structure of the code. If the original code were written in an organized way (Structural
Programming) then it would be easy to modify or add new changes.
(i) Provides Interface To Other Language:- From the perspective of the language, interface to
other language refers to the extent to which the selected language supports interfacing feature to
other languages. This type of support can have a significant impact on the reliability of the data,
which is exchanged between applications, developed with different languages. In case of data
exchange between units of different languages, without specific language support, no checking
may be done on the data or even on their existence. Hence, the potential for unreliability
becomes high-modern day languages have come a long way and most of the languages provide
interface support for other languages.

(j) Concurrency Support: Concurrency support refers to the extent to which inherent language
supports the construction of code with multiple threads of control (also known as parallel
processing). For some applications, multiple threads of control are very useful or even
necessary. This is particularly true for real time systems and those running on architecture with
multiple processors. It can also provide the programmer with more control over its
implementation. Other features include Reusability and Standardization.

4.0 Summary
In this unit, you have learnt about:
i. different programming languages and their authors;
ii. the features of each programming language; and
iii. the features of a good programming language.
5.0 Self-Assessment
a) What is a program?
b) Discuss the evolution of programming language from Ada Lovelace to Java.
6.0 Tutor Marked Assessment
a) An action is to occur at a particular time; is this a program? True/False. Justify your
answer.
b) If COBOL is good for business, identify the applications for Java, Pascal, C++, and Basic
c) List and Explain five (5) features of a good programming language
7.0 Further Reading
https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading13.htm
https://en.wikibooks.org/wiki/Introduction_to_Computer_Information_Systems/
Program_Development
http://interactivepython.org/runestone/static/CS152f17/GeneralIntro/Glossary.html
https://pages.uoregon.edu/moursund/Books/PS-Expertise/chapter-9.htm
Unit 2 Classification and Generations of Programming Languages
1.0 Introduction
2.0 Learning Outcomes
3.0 Main Content
3.1 Classification of Programming Languages
3.2 Generations of Programming Language
4.0 Summary
5.0 Self-Assessment
6.0 Tutor Marked Assessment
7.0 Further Reading

1.0 Introduction
The computer's language is machine language, that is, 0's and 1's. Communication with the
computer is via machine language. This language is cumbersome and not easy to remember, which
led to the development of assembly language and high-level languages that are more English-like
in nature. The classification and generations of programming languages are based on machine
language, assembly language and high-level language.
2.0 Learning Outcomes
At the end of this unit, you should be able to:
I. Mention programming languages according to their generations
II. Differentiate between the generations of languages
3.0 Main Content
3.1 Classification of Programming Languages

Computers understand only one language and that is binary language (the language of 0’s and
1’s) also known as machine language. In the initial years of computer programming, all the
instructions were given in binary form only. Although these programs were easily understood by
the computer, it proved too difficult for a human being to remember all the instructions in the
form of 0's and 1's. Therefore, the computer remained a mystery to the common man until other
languages such as assembly and high-level languages were developed, which were easier to learn
and understand. These languages use commands that have some degree of similarity with English
(such as if, else, exit).

Programming languages can be grouped into three major categories: machine language,
assembly (low-level) language and high–level languages.

1. Machine language: Machine language is the native language of computers. It uses only 0’s
and 1’s to represent data and the instructions written in this language, consists of series of 0’s
and 1’s. Machine language is the only language understood by the computer. The machine
language is peculiar to each type of computer.

2. Assembly (low-level) language: Assembly language provides a correspondence between symbolic
instructions and executable machine code, and was created to use letters instead of 0's and 1's
to program a machine. It is called low-level because of its closeness to machine language.

3. High-level language: These languages are written using a set of words and symbols following
some rules, similar to a natural language such as English. Programs written in high-level
languages are known as source programs, and these programs are converted into machine-readable
form by using compilers or interpreters.

3.2 Generations of Programming Language

Since early 1950s, programming languages have evolved tremendously. This evolution has
resulted in the development of hundreds of different languages. With each passing year, the
languages become user-friendly and more powerful than their predecessors. We can illustrate the
development of all the language in five generations.

FIRST GENERATION:- MACHINE LANGUAGE

The first language was binary, also known as machine language, which was used in the earliest
computers and machines. We know that computers are digital devices, which have only two
states, ON and OFF (1 and 0). Hence, computers can understand only two binary codes, 1 and 0.
Therefore, every instruction and data should be written using 0’s and 1’s. Machine language is
also known as the computer’s ‘native’ language because this system of codes is directly
understood by the computer.

Advantages of machine language: Even though machine language is not a human friendly
language, it offers certain advantages, as listed below:

i. Translation free: Machine language is the only language that computer can directly execute
without the need for conversion. In fact, it is the only language that computer is able to
understand. Even an application using high level language, has to be converted into machine-
readable form so that the computer can understand the instruction.

ii. High speed: Since no conversion is needed, the application developed using machine
languages are extremely fast. It is usually used for complex application such as space control
system, nuclear reactors, and chemical processing.

Disadvantages of Machines Languages: There are many disadvantages in using machines


languages to develop program.
i. machine dependent : Every computer type differs from the other, based on its architecture.
Hence, an application developed for a particular type of computer may not run on the other type
of the computer. This may prove to be both costly as well as difficult for the organization. E.g.
program written for one machine, say IBM 370 cannot be executed by another machine say HP
530.

ii. Complex languages: Machine language is very difficult to read and write. Since all the data
and instruction must be converted to binary code, it is almost impossible to remember the
instruction. A programmer must specify each operation, and the specific location for each piece
of data and instruction to be stored. It means that a programmer partially needs to be a hardware
expert to have proper control over the machines languages.

iii. Error prone: Since the programmer has to remember all the opcodes (Operation Codes) and
the memory location, it is bound to be error prone. It takes a super human effort to keep track of
the logic of the problems and, therefore, result in frequent programming errors.

iv. Tedious:-Machine language poses real problems while modifying and correcting a program.
Sometimes the programming becomes too complex to modify and the programmer has to re-
program the entire logic again. Therefore, it is very tedious and time consuming, and since time
is a precious commodity, programming using the machine languages tends to be costly.

Due to its overwhelming limitations, machine language is rarely used nowadays.

SECOND GENERATION: Assembly (low-level) languages,

The complexities of machine language led to the search for another language, and assembly
language was developed in the early 1950s, with IBM as its main developer. Assembly language
allows programmers to interact directly with the hardware. It assigns a mnemonic code to each
machine language instruction to make the instruction easier to remember and write, providing a
more human-readable method of writing programs than binary bit patterns.

Unlike other programming languages, assembly language is not a single language but a group of
languages. Each processor family (and sometimes individual processors within a processor
family) has its own assembly language.

An assembly language provides mnemonic instructions, usually three letters long, corresponding
to each machine instruction. The mnemonics are usually abbreviations indicating what the
instruction does: for example, ADD is used to perform an addition operation, MUL for
multiplication, and so on. Assembly languages make it easier for humans to remember how to
write instructions to the computer, but an assembly language is still a representation of the
computer's native instruction set. Since each type of computer uses a different native instruction
set, assembly languages cannot be standardized from one machine to another, and instructions
written for one computer cannot be expected to work on another.
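
As a sketch, a fragment that adds two numbers on a hypothetical accumulator-style processor
(the mnemonics below are illustrative, not drawn from any particular real instruction set)
might read:

LOAD  X        ; copy the contents of memory location X into the accumulator
ADD   Y        ; add the contents of memory location Y to the accumulator
STORE SUM      ; write the accumulator back to memory location SUM

Each line stands for exactly one machine instruction, and the symbolic names X, Y, and SUM
stand for memory addresses that the assembler will later fill in.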

Assembler:

Assembly language is essentially a symbolic representation of machine code, one that also
allows symbolic designation of memory locations. However, no matter how close assembly
language is to machine code, the computer still cannot understand it directly. Assembly language
programs must be translated into machine code by a separate program called an assembler. The
assembler recognizes the character strings that make up the symbolic names of the various
machine operations and substitutes the required machine code for each instruction. At the same
time, it calculates the required memory address for each symbolic name of a memory location
and substitutes those addresses for the names, resulting in a machine language program that can
run on its own at any time. In short, an assembler converts the assembly code into binary code
and loads the machine-understandable code into the main memory of the computer, making it
ready for execution.

Figure 5.1: The working of an assembler. An assembly program is fed to the assembler, which
produces a machine language program (the object code) together with error messages, listings,
and other outputs.

The original assembly language program is also known as the source code, while the final
machine language program is designated the object code. If an assembly language program needs
to be changed or corrected, it is necessary to make the changes to the source code and then re-
assemble it to create a new object program. The functions of an assembler are given below:

a. It allows the programmer to use mnemonics while writing source code programs, which
are easier to read and follow.
b. It allows variables to be represented by symbolic names rather than memory locations.
c. It translates mnemonic operation codes into machine code and corresponding register
addresses into system addresses.
d. It checks the syntax of the assembly program and generates diagnostic messages for
syntax errors.
e. It assembles all the instructions into the main memory for execution.
f. In the case of large assembly programs, it also provides a linking facility among the
subroutines.
g. It facilitates the generation of output on the required output medium.

Advantages of Assembly Language: The advantages of using assembly language to develop a
program are:
i. Easy to Understand and Use: Assembly language uses mnemonics instead of the numerical
opcodes and memory addresses used in machine language. Hence, programs written in assembly
language are much easier to understand and use than machine language programs. Being a more
user-friendly language than machine language, assembly programs are also easier to modify.

ii. Less Error Prone: Since mnemonic codes and symbolic addresses are used, the programmer
does not have to keep track of the storage locations of information and instructions. Hence,
there are bound to be fewer errors while writing an assembly language program. Even when
errors occur, assembly programs provide better facilities for locating and correcting them than
machine language programs do.

iii. Efficiency: Assembly programs can run much faster and use less memory and fewer other
resources than similar programs written in a high-level language. Speed increases of 2 to 20
times are common and, occasionally, an increase of hundreds of times is possible. Apart from
speed, assembly programs are also memory efficient; that is, the memory requirement of a
program (the size of its code) is usually smaller than that of a similar program written in a
high-level language.

iv. More Control on Hardware: Assembly language also gives direct access to key machine
features essential for implementing certain kinds of low-level routines such as an operating
system kernel or micro-kernel, device drivers, and machine control.

Disadvantages of Assembly Language: The disadvantages of using assembly language to
develop a program are:

i. Machine dependent: Different computer architectures have their own machine and assembly
languages, which means that programs written in these languages are not portable to other
(incompatible) systems. If an assembly program is to be moved to a different type of computer,
it has to be modified to suit the new environment.

ii. Harder to Learn: Assembly source code is cryptic and written in a very low-level, machine-
specific form. Because the language is machine dependent, every type of computer architecture
requires its own assembly language, making it nearly impossible for a programmer to remember
and understand every dialect of assembly. Only highly skilled and trained programmers, who
know the logical structure of the computer, can create applications in assembly language.

iii. Slow Development Time: Even with highly skilled programmers, applications written in
assembly are slower to develop than high-level language applications. Since several lines of
assembly code are required for each line of high-level code, the development time can be 10 to
100 times that of a high-level language application.

iv. Less Efficient: A program written in assembly language is less efficient than an equivalent
machine language program because every assembly instruction has to be converted into machine
code. Therefore, the execution of an assembly language program takes more time than its
equivalent machine language program. Moreover, before an assembly program can be executed,
the assembler has to be loaded into the computer's memory for translation, and it occupies a
sizeable portion of that memory.

v. Not Standardized: Assembly language cannot be standardized because each type of computer
has a different instruction set and, therefore, a different assembly language.

vi. No Support for Modern Software Engineering Technology: Assembly languages provide no
inherent support for software engineering technology. They work with machine-level specifics
rather than abstractions. Assembly language provides no inherent support for safety-critical
systems, offers very little opportunity for reuse, and has no object-oriented programming
support. There is also no specific support for distributed systems, and the tools available for
working with assembly language are typically very low-level.

THIRD GENERATION: HIGH-LEVEL LANGUAGE

During the 1960s, computers started to gain popularity, and it became necessary to develop
languages closer to natural languages such as English so that ordinary users could use
computers effectively. Since assembly language required a deep knowledge of computer
architecture, it demanded hardware skills as well as programming skills. Owing to the
widespread use of computers, the early 1960s saw the emergence of third-generation languages
(3GLs). COBOL, FORTRAN, BASIC, and C are examples of 3GLs and are considered
high-level languages.

In a high-level language, programs are written as a sequence of statements that mirrors human
thinking about the problem. For example, the following BASIC code snippet calculates the sum
of two numbers.

LET X = 10
LET Y = 20
LET SUM = X + Y
PRINT SUM

The first two statements store 10 in variable X (a named memory location) and 20 in variable Y,
respectively. The third statement creates a variable named SUM, which stores the sum of the
values of X and Y. Finally, the value stored in SUM is printed on the screen. From this simple
example, it is evident that even a novice user can follow the logic of the program.

TRANSLATING HIGH-LEVEL LANGUAGE TO MACHINE LANGUAGE

Since computers understand only machine language, it is necessary to convert high-level
programs into machine language code. This is achieved by using language translators or
language processors, generally known as compilers and interpreters: routines that accept
statements in one language and produce equivalent statements in another language.

A. COMPILER: A compiler is a kind of translator that translates a program into another
program, known as the target language. Usually, the term compiler is used for a translator from
a high-level language into machine language. The compiler replaces each single high-level
statement with a series of machine language instructions. When a program is to be compiled, its
compiler is loaded into main memory. The compiler stores the entire high-level program, scans
it, and translates the whole program into an equivalent machine language program. During the
translation process, the compiler reads the stored program and checks for syntax (grammatical)
errors. If there is any error, the compiler generates an error message, which is usually displayed
on the screen. In the case of errors, the compiler will not create the object code until all the
errors are rectified.

Once the program has been compiled, the resulting machine code is saved separately and can be
run on its own at any time; that is, once the object code is generated, there is no need for the
actual source code. However, if the source code is modified, the program must be recompiled
for the changes to take effect.

Figure 5.2: The working of a compiler.

NOTE: For each high-level language, a separate compiler is required. For example, a compiler
for the C language cannot translate a program written in FORTRAN. Hence, to execute
programs in both languages, the host computer must have compilers for both.
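
To make the compile-and-run cycle concrete, below is a minimal C program together with the
kind of command used to compile it on a Unix-like system (the file names are illustrative):

/* sum.c -- adds two numbers and prints the result */
#include <stdio.h>

int main(void)
{
    int x = 10;
    int y = 20;
    int sum = x + y;        /* same logic as the BASIC example above */
    printf("%d\n", sum);    /* prints 30 */
    return 0;
}

A command such as cc sum.c -o sum translates the entire source file at once into an executable
object program named sum, which can then be run any number of times without the compiler or
the source code being present. If sum.c is later modified, it must be recompiled for the change
to take effect.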

B. INTERPRETER: An interpreter is also a language translator that translates a high-level
language into machine language. However, unlike a compiler, it translates one statement of the
program at a time and executes it immediately, before translating the next source language
statement. When an error is encountered in the program, execution halts and an error message is
displayed. As with compilers, every interpreted language, such as BASIC and LISP, has its own
interpreter.

Figure 5.3: Working of an Interpreter
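
To see the practical difference in behaviour, consider the short BASIC fragment below, run
under the statement-by-statement model just described (the misspelt keyword on the last line is
deliberate):

LET X = 10
PRINT X
PRNT X

An interpreter executes the first two statements (printing 10) and only halts with an error
message when it reaches the misspelt PRNT; a compiler, by contrast, would report the error
during translation and produce no object code at all.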

There are fundamental similarities in the functioning of an interpreter and a compiler. However,
there are also certain dissimilarities, as given in Table 5.1 below.

Table 5.1: Dissimilarities between the functions of a compiler and an interpreter

Object Code: A compiler produces a separate object program, saved permanently. An interpreter
does not generate a permanently saved object code file.

Translation Process: A compiler converts the entire program into machine code at once. An
interpreter translates the source code line by line; that is, it executes the current statement
before translating the next statement.

Debugging Ease: With a compiler, removal of errors (debugging) is slow. With an interpreter,
debugging is easier because errors are pointed out immediately.

Implementation: Compilers are by nature complex programs; hence, they require hard-core
coding and more memory to execute a program. Interpreters are easier to write because they are
less complex programs, and they require less memory for program execution.

Execution Time: Compilers are faster than interpreters because all statements are translated
only once and saved in object files, which can be executed at any time without being translated
again. Interpreters are slower because each statement is translated every time it is executed
from the source program.

Nowadays, many languages use a hybrid translator that has the characteristics of both a compiler
and an interpreter. In such cases, the program is developed and debugged with the help of an
interpreter, and once the program becomes bug-free, the compiler is used to compile it.

Advantages of High-Level Languages: High-level languages (HLLs) are useful in developing
complex software, as they support complex data structures. They increase programmer
productivity (the number of lines of code generated per hour) and, unlike assembly language, do
not require the programmer to learn the instruction set of each computer being worked with.
The various advantages of using high-level languages are discussed below:

(a) Readability: Since high-level languages are closer to natural languages, they are easier to
learn and understand. In addition, a programmer does not need to be aware of the computer's
architecture; even a lay person can use an HLL without much difficulty. This is the main reason
for the popularity of HLLs.

(b) Machine Independent: High-level languages are machine independent in the sense that a
program created using an HLL can be used on different platforms with very little or no change
at all.

(c) Easy Debugging: High-level languages include support for abstraction, so that programmers
can concentrate on finding the solution to the problem rather than on low-level details of data
representation, which results in fewer errors. Moreover, compilers and interpreters are designed
to detect and point out errors instantaneously, so the finished programs are free from syntax
errors.

(d) Easier to Maintain: Compared with machine and low-level languages, programs written in
an HLL are easier to modify because they are easier to understand.

(e) Low Development Cost: High-level languages permit faster development of programs.
Although a high-level program may not be as efficient as an equivalent machine or low-level
program, the savings in programmer time generally outweigh the inefficiencies of the
application.

(f) Easy Documentation: Since statements written in an HLL are similar to natural language,
they can be easily understood by human beings. As a result, the code is largely self-explanatory;
that is, there is little or no need for comments to be inserted in programs.

Disadvantages of High-Level Languages: There are two main disadvantages of high-level
languages:

i. Poor Control on Hardware: High-level languages were developed to ease the pressure on
programmers, so that they do not have to know the intricacies of the hardware. As a result,
applications written in high-level languages sometimes cannot completely harness the total
power available at the hardware level.

ii. Less Efficient: HLL programs are less efficient as far as computation time is concerned.
This is because, unlike machine language programs, high-level programs must be passed through
another processing program, the compiler. This translation step increases the execution time:
application programs written in a high-level language take more time to execute and require
more memory space.

SOME POPULAR HIGH-LEVEL LANGUAGES

Although a number of languages have evolved over the last five decades, only a few were
considered worthwhile to market as commercial products. Some of the commonly used
high-level languages are discussed below:

(a) FORTRAN: FORTRAN, or FORmula TRANslator, was developed by John Backus for IBM
704 mainframes in 1957. The IBM 704 machines were considered inseparable from FORTRAN:
they were the first machines to provide indexing and floating-point instructions in hardware.
FORTRAN gained immense popularity compared with any of its counterparts and is still used
extensively to solve scientific and engineering problems.

The main feature of FORTRAN is that it can handle complex numbers very easily. However, its
syntax is very rigid. A FORTRAN program is divided into sub-programs; each sub-program is
treated as a separate unit and compiled separately, and the compiled units are linked together at
load time to make a complete application. FORTRAN is not well suited to handling large
amounts of data, and hence it is not often used for business applications.

(b) COBOL: COBOL, or COmmon Business-Oriented Language, evolved after many design
revisions. Grace Murray Hopper, on behalf of the US Department of Defense, was involved in
the development of COBOL as a language. She showed for the first time that a system could use
an English-like syntax, suited to business notation rather than scientific notation. The first
version was released in 1960 and was later revised in 1974 and 1985. COBOL was standardized,
with revisions, by ANSI in 1968.

COBOL is considered a robust language for the description of input/output formats, and it can
cope with large volumes of data. Due to its similarity to English, COBOL programs are easy to
read and write. Since it uses English words rather than short abbreviations, its instructions are
self-documenting and self-explanatory. However, due to its large vocabulary, programs created
using COBOL are difficult to translate. COBOL helped companies perform accounting work
more effectively and efficiently.
(c) BASIC: Beginner's All-Purpose Symbolic Instruction Code was developed by John Kemeny
and Thomas Kurtz at Dartmouth College in 1964. It was the first interpreted language made
available for general use. It came into such widespread use that most people saw and used this
language before they dealt with any other. Presently, many advanced versions of BASIC are
available and are used in a variety of fields such as business, science, and engineering.

BASIC programs were traditionally interpreted. This meant that each line of code had to be
translated as the program was running; BASIC programs therefore ran more slowly than
FORTRAN programs. However, if a BASIC program crashed because of a programming error, it
was much easier to identify the source of the problem, and in some cases the program could
even be restarted at the point where it broke down. In a BASIC program, each statement is
prefixed by a line number, which serves a dual purpose: it provides a label for every statement
and identifies the sequence in which the statements will be executed. BASIC is easy to learn as
it uses common English words, so it is a good language for beginners acquiring their initial
programming skills.

(d) PASCAL: Named after Blaise Pascal, the French philosopher, mathematician, and physicist,
PASCAL was specifically designed as a teaching language. It was developed by Niklaus Wirth
at the Federal Institute of Technology in Zurich in the early 1970s.

PASCAL is a highly structured language, which forces programmers to design programs very
carefully. Its objective was to force students to learn the techniques and requirements of
structured programming correctly. PASCAL was designed to be platform independent; that is, a
PASCAL program could run correctly on another computer, even one with a different and
incompatible type of processor. The result was relatively slow operation, but it did work in its
own fashion.

(e) C: C was developed by Dennis Ritchie at Bell Labs in the early 1970s. It grew out of earlier
experimental languages: its ancestor BCPL was refined into a language called B, which in turn
was improved, upgraded, and debugged until it finally became C. C was originally designed for
systems programming; the Unix operating system, for example, was written in C. However, it
can also be used for applications programming. The compact nature of its compiled code, plus
its speed, made it useful for early PC applications.

C provides a rich collection of standard functions useful for managing system resources. It is
flexible, efficient, and easily available. Having a syntax close to English words, it is an easy
language to learn and use. Applications written in C are portable; that is, programs written in C
can be executed on multiple platforms. C supports structured data types that allow simple data
storage, and it has the concept of pointers, which hold the memory addresses of variables and
files.
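
A minimal sketch of the pointer concept mentioned above (the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    int value = 42;
    int *ptr = &value;       /* ptr stores the memory address of value */

    printf("%d\n", *ptr);    /* dereferencing ptr retrieves 42 */
    return 0;
}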
(f) C++: This language was developed by Bjarne Stroustrup in the early 1980s. It is a superset
of C and supports object-oriented features. It is used effectively in developing system software
as well as application software. As an extension of C, C++ maintained the efficiency of C and
added the power of inheritance. C++ uses classes and objects as the backbone of object-oriented
programming. Being a superset of C, it is an extremely powerful and efficient language;
however, C++ is much harder to learn and understand than its predecessor C. The salient
features of C++ are:

• Strongly typed

• Case-sensitive

• Compiled and faster to execute

• Platform independent

(g) JAVA: This language was developed by Sun Microsystems of the USA in 1991. It was
originally called 'Oak'. Java was designed for the development of software for consumer
electronic devices. As a result, Java came out to be a simple, reliable, portable, and powerful
language that truly implements all the object-oriented features. Java was developed with the
Internet in mind and has contributed a great deal to its development, handling issues such as
portability, security, networking, and compatibility with various operating systems. It is
immensely popular on the web and is used for creating scientific and business applications.

The features of Java include:

• Simple and robust

• Secure and safe

• Truly object-oriented

• Portable and platform independent

• Multithreaded, distributed, and dynamic

FOURTH GENERATION: 4GL

Fourth-generation languages (4GLs) have simple, English-like syntax rules and are commonly
used to access databases. Third-generation languages are considered procedural languages
because the programmer must list each step and use logical control structures to indicate the
order in which instructions are to be executed. 4GLs, on the other hand, are non-procedural
languages. The non-procedural method simply states the needed output instead of specifying
each step of the task one after another. In other words, the computer is told WHAT it must do
rather than HOW it must perform the task.

The non-procedural method is easier to write, but it gives less control over how each task is
actually performed. When using non-procedural languages, the methods used and the order in
which each task is carried out are left to the language itself; the user does not have any control
over them. In addition, 4GLs sacrifice computer efficiency in order to make programs easier to
write. Hence, they require more computer power and processing time; however, with the
increase in the power and speed of hardware, and with diminishing costs, the use of 4GLs has
spread.

Fourth-generation languages have a minimal number of syntax rules. Hence, even people who
have not been trained as programmers can use them to write application programs. This saves
time and frees professional programmers for more complex tasks. The 4GLs are divided into
three categories:

1. Query Languages: These allow the user to retrieve information from databases by following
simple syntax rules. For example, a database may be requested to locate the details of all
employees drawing a salary of more than $10,000. Examples of query languages are IBM's
Structured Query Language (SQL) and Query-By-Example (QBE).
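
The salary request above might be written in SQL roughly as follows (the table and column
names here are illustrative assumptions):

SELECT * FROM employees WHERE salary > 10000;

Note that the statement says only what is wanted; how the database locates the matching rows
is left entirely to the system, which is exactly the non-procedural style described above.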

2. Report Generators: These produce customized reports using data stored in a database. The
user specifies the data to appear in the report, the report format, and whether any subtotals and
totals are needed. Often, report specifications are selected from pull-down menus, making report
generators very easy to use. Examples of report generators are Easytrieve Plus by Pansophic
and R&R Relational Report Writer by Concentric Data Systems.

3. Application Generators: With application generators, the user writes programs to allow data
to be entered into a database. The program prompts the user to enter the needed data and also
checks the data for validity. Cincom Systems' MANTIS and Cullinet's ADS are examples of
application generators.

Advantages of 4GLs:

The main advantage of 4GLs is that a user can create an application in a much shorter time, for
both development and debugging, than with other programming languages. The programmer is
concerned only with what has to be done, and that at a very high level. Being non-procedural in
nature, 4GLs do not require the programmer to provide the logic to perform a task, so a lot of
programming effort is saved compared with 3GLs. The use of procedural templates and data
dictionaries allows automatic type checking (of the programmer's code and of user input),
which results in fewer errors. Finally, with application generators, routine tasks are automated.
Disadvantages of 4GLs:

Since programs written in a 4GL are quite lengthy once translated, they need more disk space
and a larger memory capacity than 3GL programs. These languages are also inflexible, because
the programmer's control over the language and resources is limited compared with other
languages. 4GLs also cannot directly utilize the computing power available at the hardware
level as lower-level languages can.

FIFTH GENERATION: VERY HIGH-LEVEL LANGUAGE

Fifth-generation languages are, at present, largely a future concept: a conceptual view of what
programming languages might become. These languages will be able to process natural
language: the computer would accept, interpret, and execute instructions given in the natural
language of the end user, freeing the user from having to learn any programming language to
communicate with the computer. The programmer may simply type the instructions, or simply
tell the computer via a microphone, what needs to be done. Since these languages are still in
their infancy, only a few are currently commercially available. They are closely linked to
artificial intelligence and expert systems.

4.0 Summary
In this unit, you have learnt:
i. The classification and generations of computer programming languages
ii. Advantages and disadvantages of each generation
iii. Examples of languages in each generation
5.0 Self-Assessment
A Mention three (3) different programming languages according to their generations
B Differentiate between the generations of languages
C Explain the categories of 4GL
6.0 Tutor Marked Assessment
a. Differentiate between object code and source code
b. Differentiate between compiler and assembler
7.0 Further Reading
http://learnprogramming1.weebly.com/c/difference-between-source-code-and-object-code
https://en.wikibooks.org/wiki/A-level_Computing/AQA/Computer_Components,_The_Stored_Program_Concept_and_the_Internet/Fundamentals_of_Computer_Systems/Generations_of_programming_language
https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol2/mjbn/article2.html
