ALAGAPPA UNIVERSITY
[Accredited with ‘A+’ Grade by NAAC (CGPA: 3.64) in the Third Cycle
and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003

DIRECTORATE OF DISTANCE EDUCATION

Master of Computer Applications
I - Semester
315 11

DIGITAL COMPUTER ORGANIZATION
Reviewer
Mr. S. Balasubramanian
Assistant Professor in Computer Science,
Directorate of Distance Education,
Alagappa University, Karaikudi
Authors
B Basavaraj, Former Principal and HOD, Department of Electronics and Communication Engineering, SJR College of Science,
Arts & Commerce
Units (1.0-1.3, 1.5-1.10, 2, 3.0-3.2, 4, 5)
Satish K Karna, Ex-Educational Consultant Karna Institute of Technology, Chandigarh
Units (1.4, 3.3-3.10, 6)
Deepti Mehrotra, Professor, Amity School of Engineering and Technology, Amity University, Noida
Units (7-14)

"The copyright shall be vested with Alagappa University"

All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.

Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and is correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.

Vikas® is the registered trademark of Vikas® Publishing House Pvt. Ltd.


VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900 • Fax: 0120-4078999
Regd. Office: 7361, Ravindra Mansion, Ram Nagar, New Delhi 110 055
Website: www.vikaspublishing.com • Email: [email protected]

Work Order No. AU/DDE/DE1-238/Preparation and Printing of Course Materials/2018 Dated 30.08.2018 Copies - 500
SYLLABI-BOOK MAPPING TABLE
Digital Computer Organization

BLOCK I: NUMBER SYSTEMS
Unit - 1: Number Systems: Binary, Octal, Decimal and Hexadecimal Number Systems – Conversion from One Base to Another Base – Use of Complements – Binary Arithmetic – Numeric and Character Codes.
    → Unit 1: Number System (Pages 1-30)
Unit - 2: Boolean Algebra and Combinational Circuits: Fundamental Concepts of Boolean Algebra – De Morgan’s Theorems
    → Unit 2: Boolean Algebra and Combinational Circuits (Pages 31-52)
Unit - 3: Simplification of Expressions: Sum of Products and Products of Sums – Karnaugh Map Simplification – Quine–McCluskey Method – Two Level Implementation of Combinational Circuits.
    → Unit 3: Simplification of Expressions (Pages 53-81)

BLOCK II: COMBINATIONAL CIRCUITS AND SEQUENTIAL CIRCUITS
Unit - 4: Combinational Circuits: Half Adder – Full Adder – Subtractors – Decoders – Encoders – Multiplexers – Demultiplexer.
    → Unit 4: Combinational Circuits (Pages 82-101)
Unit - 5: Sequential Circuits: Flip Flops – Registers – Shift Registers – Binary Counters – BCD Counters – Memory Unit.
    → Unit 5: Sequential Circuits (Pages 102-129)
Unit - 6: Data Representation: Data Types – Complements – Fixed Point Representations – Floating Point Representations – Other Binary Codes – Error Detection Codes.
    → Unit 6: Data Representation (Pages 130-153)

BLOCK III: BASIC COMPUTER ORGANIZATION AND DESIGN
Unit - 7: Instruction Codes: Instruction Codes – Computer Registers – Computer Instructions – Timing and Control
    → Unit 7: Instruction Codes (Pages 154-162)
Unit - 8: Instruction Cycle: Memory Reference Instructions – Input Output and Interrupt – Complete Computer Description – Design of Basic Computer – Design of Accumulator Logic
    → Unit 8: Instruction Cycle (Pages 163-176)

BLOCK IV: CENTRAL PROCESSING UNIT
Unit - 9: Introduction: General Register Organization – Stack Organization
    → Unit 9: Introduction to CPU (Pages 177-189)
Unit - 10: Instruction Formats: Addressing Modes – Data Transfer and Manipulation – Program Control.
    → Unit 10: Instruction Formats (Pages 190-216)
Unit - 11: Input-Output Organization: Peripheral Devices – Input Output Interface – Asynchronous Data Transfer – Modes of Transfer
    → Unit 11: Input-Output Organization (Pages 217-236)
Unit - 12: Priority Interrupt: DMA – IOP – Serial Communication.
    → Unit 12: Priority Interrupt (Pages 237-253)

BLOCK V: MEMORY ORGANIZATION
Unit - 13: Memory Hierarchy: Main Memory – Auxiliary Memory – Associative Memory
    → Unit 13: Memory (Pages 254-268)
Unit - 14: Memory Organization: Cache Memory – Virtual Memory – Memory Management Hardware.
    → Unit 14: Memory Organization (Pages 269-274)
CONTENTS
INTRODUCTION

BLOCK I: NUMBER SYSTEMS


UNIT 1 NUMBER SYSTEM 1-30
1.0 Introduction
1.1 Objectives
1.2 Number Systems
1.2.1 Decimal Number System
1.2.2 Binary Number System
1.2.3 Octal Number System
1.2.4 Hexadecimal Number System
1.2.5 Conversion from One Number System to the Other
1.3 Binary Arithmetic
1.3.1 Binary Addition
1.3.2 Binary Subtraction
1.3.3 Binary Multiplication
1.3.4 Binary Division
1.4 Complements
1.5 Numeric and Character Codes
1.6 Answers to Check Your Progress Questions
1.7 Summary
1.8 Key Words
1.9 Self Assessment Questions and Exercises
1.10 Further Readings

UNIT 2 BOOLEAN ALGEBRA AND COMBINATIONAL CIRCUITS 31-52


2.0 Introduction
2.1 Objectives
2.2 Logic Gates and Inverter
2.2.1 AND Gate
2.2.2 OR Gate
2.2.3 NAND Gate
2.2.4 NOR Gate
2.2.5 Exclusive OR (XOR) Gates
2.2.6 Exclusive NOR Gates
2.3 Boolean Algebra and Logic Simplification
2.3.1 Laws and Rules of Boolean Algebra
2.3.2 De-Morgan’s Theorems
2.3.3 Simplification of Logic Expressions using Boolean Algebra
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
UNIT 3 SIMPLIFICATION OF EXPRESSIONS 53-81
3.0 Introduction
3.1 Objectives
3.2 SOP and POS Expressions
3.2.1 Minterm
3.2.2 Maxterm
3.2.3 Deriving Sum of Product (SOP) Expression
3.2.4 Deriving Product of Sum (POS) Expression from a Truth Table
3.3 Karnaugh Map (K-map)
3.3.1 K-Map Simplification for Two Variables Using SOP Form
3.3.2 K-Map with Three Variables Using SOP Form
3.3.3 K-Map Simplification for Four Variables Using SOP Form
3.3.4 Five-Variable K-Map
3.3.5 K-Map Using POS Form
3.4 Quine–McCluskey Method
3.5 Two Level Implementation of Combinational Circuits
3.5.1 Types of Combinational Circuits
3.5.2 Implementation of Combinational Circuits
3.6 Answers to Check Your Progress Questions
3.7 Summary
3.8 Key Words
3.9 Self Assessment Questions and Exercises
3.10 Further Readings
BLOCK II: COMBINATIONAL CIRCUITS AND SEQUENTIAL CIRCUITS
UNIT 4 COMBINATIONAL CIRCUITS 82-101
4.0 Introduction
4.1 Objectives
4.2 Combinational Logic
4.3 Adders and Subtractors
4.3.1 Full-Adder
4.3.2 Half-Subtractor
4.3.3 Full-Subtractor
4.4 Decoders
4.4.1 3-Line-to-8-Line Decoder
4.5 Encoders
4.5.1 Octal-to-Binary Encoder
4.6 Multiplexer
4.7 Demultiplexer
4.7.1 Basic Two-Input Multiplexer
4.7.2 Four-Input Multiplexer
4.8 Answers to Check Your Progress Questions
4.9 Summary
4.10 Key Words
4.11 Self Assessment Questions and Exercises
4.12 Further Readings

UNIT 5 SEQUENTIAL CIRCUITS 102-129


5.0 Introduction
5.1 Objectives
5.2 Flip-flops
5.2.1 S-R Flip-Flop
5.2.2 D Flip-Flop
5.2.3 J-K Flip-Flop
5.2.4 T Flip-Flop
5.2.5 Master–Slave Flip-Flops
5.3 Registers
5.3.1 Shift Registers Basics
5.3.2 Serial In/Serial Out Shift Registers
5.3.3 Serial In/Parallel Out Shift Registers
5.3.4 Parallel In/Serial Out Shift Registers
5.3.5 Parallel In/Parallel out Registers
5.4 Counters
5.4.1 Asynchronous Counter Operations
5.4.2 Synchronous Counter Operations
5.4.3 Design of Synchronous Counters
5.5 Answers to Check Your Progress Questions
5.6 Summary
5.7 Key Words
5.8 Self Assessment Questions and Exercises
5.9 Further Readings

UNIT 6 DATA REPRESENTATION 130-153


6.0 Introduction
6.1 Objectives
6.2 Data Types
6.3 Fixed Point Representation
6.4 Floating Point Representation
6.5 Codes
6.5.1 Weighted Binary Codes
6.5.2 Non-weighted Binary Codes
6.6 Error Detection and Correction Codes
6.7 Answers to Check Your Progress Questions
6.8 Summary
6.9 Key Words
6.10 Self Assessment Questions and Exercises
6.11 Further Readings
BLOCK III: BASIC COMPUTER ORGANIZATION AND DESIGN
UNIT 7 INSTRUCTION CODES 154-162
7.0 Introduction
7.1 Objectives
7.2 Instruction Codes
7.2.1 Instruction Formats
7.2.2 Instruction Types
7.3 Computer Registers
7.4 Computer Instructions
7.4.1 Timing and Control
7.5 Answers to Check Your Progress Questions
7.6 Summary
7.7 Key Words
7.8 Self Assessment Questions and Exercises
7.9 Further Readings

UNIT 8 INSTRUCTION CYCLE 163-176


8.0 Introduction
8.1 Objectives
8.2 Complete Computer Description
8.2.1 Basic Anatomy of a Computer
8.2.2 Basic Design and Components of a Computer
8.2.3 Data Representation within the Computer
8.3 Instruction Cycle
8.4 Memory Reference Instructions
8.4.1 Memory Reference Format
8.5 Input/Output and Interrupt
8.6 Answers to Check Your Progress Questions
8.7 Summary
8.8 Key Words
8.9 Self Assessment Questions and Exercises
8.10 Further Readings
BLOCK IV: CENTRAL PROCESSING UNIT
UNIT 9 INTRODUCTION TO CPU 177-189
9.0 Introduction
9.1 Objectives
9.2 Organization of CPU Control Registers
9.2.1 Organization of Registers in Different Computers
9.2.2 Issues Related to Register Sets
9.3 Stack Organization
9.4 Answers to Check Your Progress Questions
9.5 Summary
9.6 Key Words
9.7 Self Assessment Questions and Exercises
9.8 Further Readings

UNIT 10 INSTRUCTION FORMATS 190-216


10.0 Introduction
10.1 Objectives
10.2 Instruction Formats
10.2.1 Representation of Different Instruction Formats
10.3 Addressing Modes
10.4 Manipulation of Data Transfer and Control Program
10.4.1 Length of Instructions
10.4.2 Allocation of Bits
10.4.3 Types of Instructions
10.5 Answers to Check Your Progress Questions
10.6 Summary
10.7 Key Words
10.8 Self Assessment Questions and Exercises
10.9 Further Readings
UNIT 11 INPUT-OUTPUT ORGANIZATION 217-236
11.0 Introduction
11.1 Objectives
11.2 Peripheral Devices
11.2.1 Storage Devices: Hard Disk
11.2.2 Human-interactive I/O Devices
11.3 Input/Output (I/O) Interface
11.3.1 Problems in I/O Device Management
11.3.2 Aims of I/O Module
11.3.3 Functions of I/O Interface
11.3.4 Steps in I/O Communication with Peripheral Devices
11.3.5 Commands Received by an Interface
11.4 Asynchronous Data Transfer
11.4.1 Strobe Control
11.4.2 Handshaking
11.4.3 Asynchronous Serial and Parallel Transfers
11.5 Modes of Data Transfer
11.6 Answers to Check Your Progress Questions
11.7 Summary
11.8 Key Words
11.9 Self Assessment Questions and Exercises
11.10 Further Readings

UNIT 12 PRIORITY INTERRUPT 237-253


12.0 Introduction
12.1 Objectives
12.2 Priority Interrupt
12.2.1 Techniques of Priority Interrupt
12.2.2 Parallel Priority Interrupt
12.3 Direct Memory Access (DMA)
12.3.1 DMA Controller
12.4 Input/Output Processor (IOP)
12.5 Serial Communication
12.6 Answers to Check Your Progress Questions
12.7 Summary
12.8 Key Words
12.9 Self Assessment Questions and Exercises
12.10 Further Readings
BLOCK V: MEMORY ORGANIZATION
UNIT 13 MEMORY 254-268
13.0 Introduction
13.1 Objectives
13.2 Memory Hierarchy
13.3 Main Memory
13.3.1 RAM
13.3.2 ROM
13.4 Auxiliary Memory
13.5 Associative Memory
13.6 Answers to Check Your Progress Questions
13.7 Summary
13.8 Key Words
13.9 Self Assessment Questions and Exercises
13.10 Further Readings

UNIT 14 MEMORY ORGANIZATION 269-274


14.0 Introduction
14.1 Objectives
14.2 Cache Memory
14.3 Virtual Memory
14.4 Memory Management Hardware
14.5 Answers to Check Your Progress Questions
14.6 Summary
14.7 Key Words
14.8 Self Assessment Questions and Exercises
14.9 Further Readings
INTRODUCTION

The term ‘digital’ has become quite common in this age of constantly improving
technology. It is most commonly used in the fields of electronics and computing,
wherein information is transformed into binary numeric form, as in digital
photography or digital audio. Digital systems use discontinuous or discrete values
to represent information for processing, storage, transmission and input,
whereas analog systems use continuous values for the representation of information.
Computer organization helps in optimizing performance-based products.
Software engineers need to know the processing ability of processors. They may
need to optimize software in order to gain the most performance at the least expense.
This can require quite detailed analysis of the computer organization. In a multimedia
decoder, for example, the designers might need to arrange for most data to be processed
in the fastest data path; the various components are assumed to be in place, and the
task is to investigate the organizational structure to verify that the computer parts operate correctly.
Computer organization also helps in planning the selection of a processor for a
particular project. Sometimes certain tasks need additional components as well.
For example, a computer capable of virtualization needs virtual memory hardware
so that the memory of different simulated computers can be kept separated. The
computer organization and features also affect the power consumption and the
cost of the processor.
This book, Digital Computer Organization, follows the self-instruction
mode or the SIM format wherein each unit begins with an ‘Introduction’ to the
topic followed by an outline of the ‘Objectives’. The content is presented in a
simple and structured form interspersed with ‘Check Your Progress’ questions for
better understanding. At the end of each unit, a list of ‘Key Words’ is provided
along with a ‘Summary’ and a set of ‘Self Assessment Questions and Exercises’
for effective recapitulation.

BLOCK I
NUMBER SYSTEMS

UNIT 1 NUMBER SYSTEM
Structure
1.0 Introduction
1.1 Objectives
1.2 Number Systems
1.2.1 Decimal Number System
1.2.2 Binary Number System
1.2.3 Octal Number System
1.2.4 Hexadecimal Number System
1.2.5 Conversion from One Number System to the Other
1.3 Binary Arithmetic
1.3.1 Binary Addition
1.3.2 Binary Subtraction
1.3.3 Binary Multiplication
1.3.4 Binary Division
1.4 Complements
1.5 Numeric and Character Codes
1.6 Answers to Check Your Progress Questions
1.7 Summary
1.8 Key Words
1.9 Self Assessment Questions and Exercises
1.10 Further Readings

1.0 INTRODUCTION

In this unit, you will learn about number systems and binary codes. In mathematics,
a 'number system' is a set of numbers together with one or more operations, such
as addition or multiplication. The number systems are represented as natural
numbers, integers, rational numbers, algebraic numbers, real numbers, complex
numbers, etc. A number symbol is called a numeral. A numeral system or system
of numeration is a writing system for expressing numbers. For example, the standard
decimal representation of whole numbers gives every whole number a unique
representation as a finite sequence of digits. You will learn about the binary numeral
system or base-2 number system that represents numeric values using two symbols,
0 and 1. This base-2 system is specifically a positional notation with a radix of 2.
It is implemented in digital electronic circuitry using logic gates and is the number
system used by all modern computers. Since binary is a base-2 system, each digit
represents an increasing power of 2, with the rightmost digit representing 2^0,
the next representing 2^1, then 2^2, and so on. To determine the decimal
representation of a binary number, simply take the sum of the products of the
binary digits and the powers of 2 which they represent. You will also learn about
the octal, decimal and hexadecimal numeral systems.

1.1 OBJECTIVES

After going through this unit, you will be able to:

• Describe number systems
• Understand the decimal, binary, octal and hexadecimal number systems
• Convert a number from one number system into another number system
• Perform arithmetic operations such as binary addition, subtraction,
multiplication and division
• Understand numeric and character codes

1.2 NUMBER SYSTEMS

A number is an idea that is used to refer to an amount of things. People use number
words, number gestures and number symbols. Number words are said out loud.
Number gestures are made with some part of the body, usually the hands. Number
symbols are marked or written down. A number symbol is called a numeral. The
number is the idea we think of when we see the numeral, or when we see or hear
the word.
On hearing the word number, we immediately think of the familiar decimal
number system with its 10 digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. These numerals
are called Arabic numerals. Our present number system provides modern
mathematicians and scientists with great advantages over those of previous
civilizations and is an important factor in our advancement. Since fingers are the
most convenient tools nature has provided, human beings use them in counting.
So, the decimal number system followed naturally from this usage.
A number system of base, or radix, r uses distinct symbols for r
digits. Numbers are represented by a string of digit symbols. To determine the
quantity that the number represents, it is necessary to multiply each digit by an
integer power of r and then form the sum of all the weighted digits. It is possible to
use any whole number greater than one as a base in building a numeration system.
The number of digits used is always equal to the base.
There are four systems of arithmetic which are often used in digital systems.
These systems are as follows:
1. Decimal
2. Binary
3. Octal
4. Hexadecimal
In any number system, there is an ordered set of symbols known as digits.
A collection of these digits makes a number which in general has two parts, integer
and fractional, set apart by a radix point (.). Hence, a number can be
represented as,

    N_b = a_{n-1} a_{n-2} a_{n-3} ... a_1 a_0 . a_{-1} a_{-2} a_{-3} ... a_{-m}
          (integer portion)                    (fractional portion)

where, N = the number,
       b = radix or base of the number system,
       n = number of digits in the integer portion,
       m = number of digits in the fractional portion,
       a_{n-1} = the Most Significant Digit (MSD),
       a_{-m} = the Least Significant Digit (LSD),
and 0 ≤ a_i ≤ b – 1.
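To make the positional formula concrete, here is a minimal Python sketch (ours, not part of the original text; the function name positional_value is illustrative) that evaluates a number from its digit lists and base:

    # Evaluate N = sum of a_i * b**i over the integer and fractional digits.
    def positional_value(int_digits, frac_digits, base):
        # int_digits:  [a_(n-1), ..., a_1, a_0], most significant digit first
        # frac_digits: [a_(-1), a_(-2), ..., a_(-m)]
        value = 0
        for digit in int_digits:          # integer portion: a_i * base**i
            value = value * base + digit
        weight = 1.0
        for digit in frac_digits:         # fractional portion: a_(-i) * base**(-i)
            weight /= base
            value += digit * weight
        return value

    # [1011.101] in base 2 evaluates to 11.625 (compare the mixed-number
    # example later in this unit):
    print(positional_value([1, 0, 1, 1], [1, 0, 1], 2))   # 11.625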
Base or Radix: The base or radix of a number is defined as the number of
different digits which can occur in each position in the number system.

1.2.1 Decimal Number System


The number system which utilizes ten distinct digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and
9 is known as decimal number system. It represents numbers in terms of groups of
ten, as shown in Figure 1.1.
We would be forced to stop at 9 or to invent more symbols if it were not for
the use of positional notation. It is necessary to learn only 10 basic numerals and
the positional notation system in order to count to any desired figure.

Fig. 1.1 Decimal Position Values as Powers of 10

The decimal number system has a base or radix of 10. Each of the ten
decimal digits 0 through 9 has a place value or weight depending on its position.
The weights are units, tens, hundreds, and so on. The same can be written as
powers of its base as 10^0, 10^1, 10^2, 10^3, ..., etc. Thus, the number 1993
represents a quantity equal to 1000 + 900 + 90 + 3. Actually, this should be written
as {1 × 10^3 + 9 × 10^2 + 9 × 10^1 + 3 × 10^0}. Hence, 1993 is the sum of all
digits multiplied by their weights. Each position has a value 10 times greater than
the position to its right.

For example, the number 379 actually stands for the following representation:

    Weight:  10^2   10^1   10^0
             100    10     1
    Digit:    3      7      9

∴ [379]₁₀ = 3 × 100 + 7 × 10 + 9 × 1

In this example, 9 is the Least Significant Digit (LSD) and 3 is the Most
Significant Digit (MSD).
Example 1.1: Write the number 1936.469 using decimal representation.
Solution: [1936.469]₁₀ = 1 × 10^3 + 9 × 10^2 + 3 × 10^1 + 6 × 10^0 + 4 × 10^–1
+ 6 × 10^–2 + 9 × 10^–3
= 1000 + 900 + 30 + 6 + 0.4 + 0.06 + 0.009
It is seen that powers are numbered to the left of the decimal point starting
with 0 and to the right of the decimal point starting with –1.
The general rule for representing numbers in the decimal system by using
positional notation is as follows:
    a_n a_{n-1} ... a_2 a_1 a_0 = a_n 10^n + a_{n-1} 10^{n-1} + ... + a_2 10^2 + a_1 10^1 + a_0 10^0
where n is the number of digits to the left of the decimal point.

1.2.2 Binary Number System


A number system that uses only two digits, 0 and 1 is called the binary number
system. The binary number system is also called a base two system. The two
symbols 0 and 1 are known as bits (binary digits).
The binary system groups numbers by two’s and by powers of two as
shown in Figure 1.2. The word binary comes from a Latin word meaning two at a
time.

Fig. 1.2 Binary Position Values as a Power of 2

The weight or place value of each position can be expressed in terms of
2 and is represented as 2^0, 2^1, 2^2, etc. The least significant digit has a weight of
2^0 (= 1). The second position to the left of the least significant digit is multiplied by
2^1 (= 2). The third position has a weight equal to 2^2 (= 4). Thus, the weights are
in the ascending powers of 2, i.e., 1, 2, 4, 8, 16, 32, 64, 128, etc.

The numeral 10₂ (one, zero, base two) stands for two, the base of the
system.
In binary counting, single digits are used for none and one. Two-digit
numbers are used for 10₂ and 11₂ [2 and 3 in decimal numerals]. For the next
counting number, 100₂ (4 in decimal numerals), three digits are necessary. After
111₂ (7 in decimal numerals), four-digit numerals are used until 1111₂ (15 in
decimal numerals) is reached, and so on. In a binary numeral, every position
has a value 2 times the value of the position to its right.
A binary number with 4 bits is called a nibble, and a binary number with 8
bits is known as a byte.
For example, the number 1011₂ actually stands for the following
representation:
    1011₂ = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0
          = 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1
∴ 1011₂ = 8 + 0 + 2 + 1 = 11₁₀
In general,
    [b_n b_{n-1} ... b_2 b_1 b_0]₂ = b_n 2^n + b_{n-1} 2^{n-1} + ... + b_2 2^2 + b_1 2^1 + b_0 2^0
Similarly, the binary number 10101.011 can be written as follows:
      1     0     1     0     1    .    0      1      1
     2^4   2^3   2^2   2^1   2^0   .   2^–1   2^–2   2^–3
    (MSD)                   (LSD)
∴ 10101.011₂ = 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0
             + 0 × 2^–1 + 1 × 2^–2 + 1 × 2^–3
             = 16 + 0 + 4 + 0 + 1 + 0 + 0.25 + 0.125 = 21.375₁₀
In a binary number, the place value increases in powers of two starting with 0 to
the left of the binary point and decreases to the right of the binary point starting
with power –1.

Why Binary Number System is used in Digital Computers?


The binary number system is used in digital computers because all electrical and
electronic circuits can be made to respond to the two-states concept. A switch, for
instance, can be either opened or closed; only two possible states exist. A transistor
can be made to operate either in cutoff or saturation; a magnetic tape can be either
magnetized or non-magnetized; a signal can be either HIGH or LOW; a punched
tape can have a hole or no hole. In all of the above illustrations, each device is
operated in any one of the two possible states and the intermediate condition does
not exist. Thus, 0 can represent one of the states and 1 can represent the other.
Hence, binary numbers are convenient to use in analysing or designing digital circuits.
1.2.3 Octal Number System
The octal number system was used extensively by early minicomputers. However,
for both large and small systems, it has largely been supplanted by the hexadecimal
system. Sets of 3-bit binary numbers can be represented by octal numbers and
this can be conveniently used for the entire data in the computer.
A number system that uses eight digits, 0, 1, 2, 3, 4, 5, 6 and 7, is called an octal
number system. It has a base of eight. The digits 0 through 7 have exactly the
same physical meaning as decimal symbols. In this system, each digit has a weight
corresponding to its position as shown below:
    a_n 8^n + ... + a_3 8^3 + a_2 8^2 + a_1 8^1 + a_0 8^0 + a_{-1} 8^–1 + a_{-2} 8^–2 + ... + a_{-n} 8^–n
Octal Odometer
Octal odometer is a hypothetical device similar to the odometer of a car. Each
display wheel of this odometer contains only eight digits (teeth), numbered 0 to 7.
When a wheel turns from 7 back to 0 after one rotation, it sends a carry to the
next higher wheel. Table 1.1 shows equivalent numbers in decimal, binary and
octal systems.
Table 1.1 Equivalent Numbers in Decimal, Binary and Octal Systems

Decimal (Radix 10) Binary (Radix 2) Octal (Radix 8)

0 000 000 0
1 000 001 1
2 000 010 2
3 000 011 3
4 000 100 4
5 000 101 5
6 000 110 6
7 000 111 7
8 001 000 10
9 001 001 11
10 001 010 12
11 001 011 13
12 001 100 14
13 001 101 15
14 001 110 16
15 001 111 17
16 010 000 20

Consider an octal number [567.3]₈. It is pronounced as five, six, seven octal
point three and not five hundred sixty seven point three. The coefficients of the
integer part are a₀ = 7, a₁ = 6, a₂ = 5 and the coefficient of the fractional part is
a₋₁ = 3.

1.2.4 Hexadecimal Number System
The hexadecimal system groups numbers by sixteen and powers of sixteen.
Hexadecimal numbers are used extensively in microprocessor work. Most
minicomputers and microcomputers have their memories organized into sets of
bytes, each consisting of eight binary digits. Each byte either is used as a single
entity to represent a single alphanumeric character or broken into two 4-bit pieces.
When the bytes are handled in two 4-bit pieces, the programmer is given the
option of declaring each 4-bit character as a piece of a binary number or as two
BCD numbers.
The hexadecimal number is formed from a binary number by grouping bits
in groups of 4 bits each, starting at the binary point. This is a logical way of grouping,
since computer words come in 8 bits, 16 bits, 32 bits, and so on. In a group of 4
bits, the decimal numbers 0 to 15 can be represented as shown in Table 1.2.
The hexadecimal number system has a base of 16. Thus, it has 16 distinct
digit symbols. It uses the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 plus the letters A, B,
C, D, E and F as 16 digit symbols. The relationship among octal, hexadecimal and
binary is shown in Table 1.2. Each hexadecimal number represents a group of four
binary digits.
Table 1.2 Equivalent Numbers in Decimal, Binary, Octal and Hexadecimal Number Systems

Decimal Binary Octal Hexadecimal


(Radix 10) (Radix 2) (Radix 8) (Radix 16)
0 0000 0 0
1 0001 1 1
2 0010 2 2
3 0011 3 3
4 0100 4 4
5 0101 5 5
6 0110 6 6
7 0111 7 7
8 1000 10 8
9 1001 11 9
10 1010 12 A
11 1011 13 B
12 1100 14 C
13 1101 15 D
14 1110 16 E
15 1111 17 F
16 0001 0000 20 10
17 0001 0001 21 11
18 0001 0010 22 12
19 0001 0011 23 13
20 0001 0100 24 14

Counting in Hexadecimal
When counting in hex, each digit can be incremented from 0 to F. Once it reaches
F, the next count causes it to recycle to 0 and the next higher digit is incremented.
This is illustrated in the following counting sequences: 0038, 0039, 003A, 003B,
003C, 003D, 003E, 003F, 0040; 06B8, 06B9, 06BA, 06BB, 06BC, 06BD,
06BE, 06BF, 06C0, 06C1.
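These counting sequences can be checked with a few lines of Python (an illustrative aside, not from the original text); the format specifier '04X' prints a value as four uppercase hexadecimal digits:

    for n in range(0x0038, 0x0041):
        print(format(n, '04X'), end=' ')   # 0038 0039 003A ... 0040
    print()
    for n in range(0x06B8, 0x06C2):
        print(format(n, '04X'), end=' ')   # 06B8 06B9 ... 06C1
    print()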
1.2.5 Conversion from One Number System to the Other
Binary to Decimal Conversion
A binary number can be converted into decimal number by multiplying the binary
1 or 0 by the weight corresponding to its position and adding all the values.
Example 1.2: Convert the binary number 110111 to decimal number.
Solution: 110111₂ = 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0
= 1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 1 × 1
= 32 + 16 + 0 + 4 + 2 + 1
= 55₁₀
We can streamline binary to decimal conversion by the following procedure:
Step 1: Write the binary, i.e., all its bits in a row.
Step 2: Write 1, 2, 4, 8, 16, 32, ..., directly under the binary number working
from right to left.
Step 3: Omit the decimal weight which lies under zero bits.
Step 4: Add the remaining weights to obtain the decimal equivalent.
The same method is used for binary fractional numbers.
Example 1.3: Convert the binary number 11101.1011 into its decimal
equivalent.
Solution:
Step 1:  1    1    1    0    1    .    1     0     1      1
                                  ↑
                             Binary Point
Step 2:  16   8    4    2    1    .    0.5   0.25  0.125  0.0625
Step 3:  16   8    4    0    1    .    0.5   0     0.125  0.0625
Step 4:  16 + 8 + 4 + 1 + 0.5 + 0.125 + 0.0625 = [29.6875]₁₀
Hence, [11101.1011]₂ = [29.6875]₁₀
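The four-step weight method translates directly into a short Python sketch (ours; the function name binary_to_decimal is illustrative), handling an optional fractional part:

    def binary_to_decimal(bits):
        if '.' in bits:
            int_part, frac_part = bits.split('.')
        else:
            int_part, frac_part = bits, ''
        total = 0.0
        for i, bit in enumerate(reversed(int_part)):  # weights 1, 2, 4, 8, ...
            if bit == '1':                            # omit weights under 0 bits
                total += 2 ** i
        for i, bit in enumerate(frac_part, start=1):  # weights 0.5, 0.25, ...
            if bit == '1':
                total += 2 ** -i
        return total

    print(binary_to_decimal('11101.1011'))   # 29.6875, as in Example 1.3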
Table 1.3 lists the binary numbers from 0000 to 10000. Table 1.4 lists
powers of 2 and their decimal equivalents and the number of K. The abbreviation
K stands for 2^10 = 1024. Therefore, 1K = 1024, 2K = 2048, 3K = 3072, 4K =
4096, and so on. Many personal computers have 64K memory; this means that
such computers can store up to 65,536 bytes in the memory section.
Table 1.3 Binary Numbers          Table 1.4 Powers of 2

Decimal   Binary                  Powers of 2   Equivalent   Abbreviation
0         0                       2^0           1
1         1                       2^1           2
2         10                      2^2           4
3         11                      2^3           8
4         100                     2^4           16
5         101                     2^5           32
6         110                     2^6           64
7         111                     2^7           128
8         1000                    2^8           256
9         1001                    2^9           512
10        1010                    2^10          1024         1K
11        1011                    2^11          2048         2K
12        1100                    2^12          4096         4K
13        1101                    2^13          8192         8K
14        1110                    2^14          16384        16K
15        1111                    2^15          32768        32K
16        10000                   2^16          65536        64K

Decimal to Binary Conversion


There are several methods for converting a decimal number to a binary number.
The first method is simply to subtract values of powers of 2 which can be subtracted
from the decimal number until nothing remains. The value of the highest power of
2 is subtracted first, then the second highest, and so on.
Example 1.4: Convert the decimal integer 29 to the binary number system.
Solution: First the value of the highest power of 2 which can be subtracted from
29 is found. This is 2^4 = 16.
Then, 29 – 16 = 13
The value of the highest power of 2 which can be subtracted from 13 is 2^3;
then 13 – 2^3 = 13 – 8 = 5. The value of the highest power of 2 which can be
subtracted from 5 is 2^2. Then 5 – 2^2 = 5 – 4 = 1. The remainder after subtraction
is 1, or 2^0. Therefore, the binary representation for 29 is given by,
    29₁₀ = 2^4 + 2^3 + 2^2 + 0 × 2^1 + 2^0 = 16 + 8 + 4 + 0 + 1
         = 1 1 1 0 1
    [29]₁₀ = [11101]₂
Similarly, [25.375]₁₀ = 16 + 8 + 1 + 0.25 + 0.125
                      = 2^4 + 2^3 + 0 + 0 + 2^0 + 0 + 2^–2 + 2^–3
    [25.375]₁₀ = [11011.011]₂
This is a laborious method for converting numbers. It is convenient for small numbers
and can be performed mentally, but is less practical for larger numbers.
Double Dabble Method
A popular method known as the double dabble method, also known as the
divide-by-two method, is used to convert a large decimal number into its binary
equivalent. In this method, the decimal number is repeatedly divided by 2 and the
remainder after each division is used to indicate the coefficient of the binary number
to be formed. Notice that the binary number derived is written from the bottom up.
Example 1.5: Convert 199₁₀ into its binary equivalent.
Solution: 199 ÷ 2 = 99 + remainder 1 (LSB)
           99 ÷ 2 = 49 + remainder 1
           49 ÷ 2 = 24 + remainder 1
           24 ÷ 2 = 12 + remainder 0
           12 ÷ 2 =  6 + remainder 0
            6 ÷ 2 =  3 + remainder 0
            3 ÷ 2 =  1 + remainder 1
            1 ÷ 2 =  0 + remainder 1 (MSB)
The binary representation of 199 is, therefore, 11000111. Checking the
result we have,
[11000111]₂ = 1 × 2^7 + 1 × 2^6 + 0 × 2^5 + 0 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1
+ 1 × 2^0
= 128 + 64 + 0 + 0 + 0 + 4 + 2 + 1
∴ [11000111]₂ = [199]₁₀
Notice that the first remainder is the LSB and last remainder is the MSB.
This method will not work for mixed numbers.
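For whole numbers, the double dabble procedure is easy to sketch in Python (an illustration of the method above, not part of the original text):

    def decimal_to_binary(n):
        if n == 0:
            return '0'
        bits = []
        while n > 0:
            n, remainder = divmod(n, 2)   # repeated division by 2
            bits.append(str(remainder))   # the first remainder is the LSB
        return ''.join(reversed(bits))    # the last remainder is the MSB

    print(decimal_to_binary(199))   # 11000111, as in Example 1.5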
Decimal Fraction to Binary
The conversion of decimal fraction to binary fractions may be accomplished by
using several techniques. Again, the most obvious method is to subtract the highest
value of the negative power of 2, which may be subtracted from the decimal
fraction. Then, the next highest value of the negative power of 2 is subtracted from
the remainder of the first subtraction and this process is continued until there is no
remainder or to the desired precision.
Example 1.6: Convert decimal 0.875 to a binary number.
Solution: 0.875 – 1 × 2^–1 = 0.875 – 0.5 = 0.375
          0.375 – 1 × 2^–2 = 0.375 – 0.25 = 0.125
          0.125 – 1 × 2^–3 = 0.125 – 0.125 = 0
∴ [0.875]₁₀ = [0.111]₂
A much simpler method of converting longer decimal fractions to binary
consists of repeatedly multiplying by 2 and recording any carries in the integer
position.
Example 1.7: Convert 0.6940₁₀ to a binary number.
Solution: 0.6940 × 2 = 1.3880 = 0.3880 with a carry of 1
0.3880 × 2 = 0.7760 = 0.7760 with a carry of 0
0.7760 × 2 = 1.5520 = 0.5520 with a carry of 1
0.5520 × 2 = 1.1040 = 0.1040 with a carry of 1
0.1040 × 2 = 0.2080 = 0.2080 with a carry of 0
0.2080 × 2 = 0.4160 = 0.4160 with a carry of 0
0.4160 × 2 = 0.8320 = 0.8320 with a carry of 0
0.8320 × 2 = 1.6640 = 0.6640 with a carry of 1
0.6640 × 2 = 1.3280 = 0.3280 with a carry of 1
We may stop here as the answer would be approximate.
 [0.6940]10 = [0.101100011]2
If more accuracy is needed, continue multiplying by 2 until you have as
many digits as necessary for your application.
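The repeated multiplication-by-2 procedure can be sketched as follows (ours; the digit limit is arbitrary, since fractions such as 0.6940 never terminate in binary):

    def fraction_to_binary(frac, digits=9):
        bits = []
        for _ in range(digits):
            frac *= 2
            carry = int(frac)        # the carry is the integer part, 0 or 1
            bits.append(str(carry))
            frac -= carry            # keep only the fractional part
            if frac == 0:
                break
        return '0.' + ''.join(bits)

    print(fraction_to_binary(0.6940))   # 0.101100011, as in Example 1.7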
Example 1.8: Convert 14.62510 to binary number.
Solution: First the integer part 14 is converted into binary and then, the fractional
part 0.625 is converted into binary as shown below:
Integer part                  Fractional part
14 ÷ 2 = 7 + 0                0.625 × 2 = 1.250 with a carry of 1
 7 ÷ 2 = 3 + 1                0.250 × 2 = 0.500 with a carry of 0
 3 ÷ 2 = 1 + 1                0.500 × 2 = 1.000 with a carry of 1
 1 ÷ 2 = 0 + 1
∴ The binary equivalent is [1110.101]₂
Octal to Decimal Conversion
An octal number can be easily converted to its decimal equivalent by multiplying
each octal digit by its positional weight.
Example 1.9: Convert (376)8 to decimal number.
Solution: The process is similar to binary to decimal conversion except that the
base here is 8.
[376]₈ = 3 × 8^2 + 7 × 8^1 + 6 × 8^0
= 3 × 64 + 7 × 8 + 6 × 1 = 192 + 56 + 6 = [254]₁₀
The fractional part can be converted into decimal by multiplying it by the
negative powers of 8.
Example 1.10: Convert (0.4051)₈ to decimal number.
Solution: [0.4051]₈ = 4 × 8^–1 + 0 × 8^–2 + 5 × 8^–3 + 1 × 8^–4
= 4 × (1/8) + 0 × (1/64) + 5 × (1/512) + 1 × (1/4096)
∴ [0.4051]₈ = [0.5100098]₁₀
Example 1.11: Convert (6327.45)₈ to its decimal number.
Solution: [6327.45]₈ = 6 × 8^3 + 3 × 8^2 + 2 × 8^1 + 7 × 8^0 + 4 × 8^–1 + 5 × 8^–2
= 3072 + 192 + 16 + 7 + 0.5 + 0.078125
∴ [6327.45]₈ = [3287.578125]₁₀
Decimal to Octal Conversion
The methods used for converting a decimal number to its octal equivalent are the
same as those used to convert from decimal to binary. To convert a decimal number
to octal, we progressively divide the decimal number by 8, writing down the
remainders after each division. This process is continued until zero is obtained as
the quotient, the first remainder being the LSD.
The fractional part is multiplied by 8 to get a carry and a fraction. The new
fraction obtained is again multiplied by 8 to get a new carry and a new fraction.
This process is continued until enough digits have been obtained for the desired accuracy.
Example 1.12: Convert [416.12]₁₀ to octal number.
Solution: Integer part  416 ÷ 8 = 52 + remainder 0 (LSD)
                         52 ÷ 8 =  6 + remainder 4
                          6 ÷ 8 =  0 + remainder 6 (MSD)
Fractional part  0.12 × 8 = 0.96 = 0.96 with a carry of 0
                 0.96 × 8 = 7.68 = 0.68 with a carry of 7
                 0.68 × 8 = 5.44 = 0.44 with a carry of 5
                 0.44 × 8 = 3.52 = 0.52 with a carry of 3
                 0.52 × 8 = 4.16 = 0.16 with a carry of 4
                 0.16 × 8 = 1.28 = 0.28 with a carry of 1
                 0.28 × 8 = 2.24 = 0.24 with a carry of 2
                 0.24 × 8 = 1.92 = 0.92 with a carry of 1
∴ [416.12]₁₀ = [640.07534121]₈
Example 1.13: Convert [3964.63]10 to octal number.

Solution: Integer part  3964 ÷ 8 = 495 with a remainder of 4 (LSD)
                         495 ÷ 8 =  61 with a remainder of 7
                          61 ÷ 8 =   7 with a remainder of 5
                           7 ÷ 8 =   0 with a remainder of 7 (MSD)
∴ [3964]₁₀ = [7574]₈
Fractional part  0.63 × 8 = 5.04 = 0.04 with a carry of 5
                 0.04 × 8 = 0.32 = 0.32 with a carry of 0
                 0.32 × 8 = 2.56 = 0.56 with a carry of 2
                 0.56 × 8 = 4.48 = 0.48 with a carry of 4
                 0.48 × 8 = 3.84 = 0.84 with a carry of 3 (LSD)
∴ [3964.63]₁₀ = [7574.50243]₈
Note that the first carry is the MSD of the fraction. More accuracy can be
obtained by continuing the process to obtain more octal digits.

Octal to Binary Conversion


Since 8 is the third power of 2, we can convert each octal digit into its 3-bit binary
form and, conversely, each group of 3 bits into an octal digit. All eight 3-bit binary
combinations are required to represent the eight octal digits. The octal number
system is often used in digital systems, especially for input/output applications. Each
octal digit and the 3 bits that represent it are shown in Table 1.5.
Table 1.5 Octal to Binary Conversion

Octal digit Binary equivalent


0 000
1 001
2 010
3 011
4 100
5 101
6 110
7 111
10 001 000
11 001 001
12 001 010
13 001 011
14 001 100
15 001 101
16 001 110
17 001 111
Example 1.14: Convert [675]₈ to binary number.
Solution: Octal digit    6     7     5
                         ↓     ↓     ↓
          Binary        110   111   101
∴ [675]₈ = [110 111 101]₂
Example 1.15: Convert [246.71]₈ to binary number.
Solution: Octal digit    2     4     6    .    7     1
                         ↓     ↓     ↓         ↓     ↓
          Binary        010   100   110   .   111   001
∴ [246.71]₈ = [010 100 110 . 111 001]₂

Binary to Octal Conversion


The simplest procedure is to use the binary triplet method. The binary digits are
grouped into groups of three on each side of the binary point with zeros added on
either side if needed to complete a group of three. Then, each group of 3 bits is
converted to its octal equivalent. Note that the highest digit in the octal system is 7.
Example 1.16: Convert [11001.101011]₂ to octal number.
Solution: Binary  11001.101011
Divide into groups of 3 bits:   011   001   .   101   011
                                 ↓     ↓         ↓     ↓
                                 3     1    .    5     3
Note that a zero is added to the left-most group of the integer part. Thus, the
desired octal conversion is [31.53]₈.
Example 1.17: Convert [11101.101101]₂ to octal number.
Solution: Binary  [11101.101101]₂
Divide into groups of 3 bits:   011   101   .   101   101
                                 ↓     ↓         ↓     ↓
                                 3     5    .    5     5
∴ [11101.101101]₂ = [35.55]₈
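The binary-triplet method amounts to padding each side of the binary point to a multiple of three bits and reading off each group. A Python sketch of the procedure (ours, not part of the original text):

    def binary_to_octal(bits):
        int_part, _, frac_part = bits.partition('.')
        int_part = int_part.zfill((len(int_part) + 2) // 3 * 3)          # pad left
        frac_part = frac_part.ljust((len(frac_part) + 2) // 3 * 3, '0')  # pad right
        digit = lambda g: str(int(g, 2))   # each 3-bit group is one octal digit
        octal = ''.join(digit(int_part[i:i+3]) for i in range(0, len(int_part), 3))
        if frac_part:
            octal += '.' + ''.join(digit(frac_part[i:i+3])
                                   for i in range(0, len(frac_part), 3))
        return octal

    print(binary_to_octal('11001.101011'))   # 31.53, as in Example 1.16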

Hexadecimal to Binary Conversion


Hexadecimal numbers can be converted into binary numbers by converting each
hexadecimal digit to its 4-bit binary equivalent using the code given in Table 1.2. If
the hexadecimal digit is 3, it should not be represented by 2 bits as [11]₂, but
should be represented by 4 bits as [0011]₂.
Example 1.18: Convert [EC2]₁₆ to binary number.
Solution: Hexadecimal number     E      C      2
                                 ↓      ↓      ↓
          Binary equivalent    1110   1100   0010
∴ [EC2]₁₆ = [1110 1100 0010]₂
Example 1.19: Convert [2AB.81]₁₆ to binary number.
Solution: Hexadecimal number
      2      A      B    .    8      1
      ↓      ↓      ↓         ↓      ↓
    0010   1010   1011   .  1000   0001
∴ [2AB.81]₁₆ = [0010 1010 1011 . 1000 0001]₂

Binary to Hexadecimal Conversion


Conversion from binary to hexadecimal is easily accomplished by partitioning the
binary number into groups of four binary digits, starting from the binary point to
the left and to the right. It may be necessary to add zeros to the last group if it does
not contain exactly 4 bits. Each 4-bit group must then be represented by its
hexadecimal equivalent.
Example 1.20: Convert [10011100110]₂ to hexadecimal number.
Solution: Binary number [10011100110]₂
Grouping the above binary number into 4 bits, we have
    0100   1110   0110
     ↓      ↓      ↓
     4      E      6
∴ [10011100110]₂ = [4E6]₁₆
Example 1.21: Convert [111101110111.111011]₂ to hexadecimal number.
Solution: Binary number [111101110111.111011]₂
By grouping into 4 bits we have,
    1111   0111   0111   .   1110   1100
     ↓      ↓      ↓          ↓      ↓
     F      7      7     .    E      C
∴ [111101110111.111011]₂ = [F77.EC]₁₆
The conversion between hexadecimal and binary is done in exactly the same
manner as octal and binary, except that groups of 4 bits are used.
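For whole numbers, these conversions can be cross-checked with Python's built-ins: int() parses a string in a given base, while bin(), oct() and hex() format a number in bases 2, 8 and 16 (an illustrative aside, not from the original text):

    n = int('10011100110', 2)
    print(hex(n))          # 0x4e6, i.e., [4E6] in base 16, as in Example 1.20
    print(oct(n), bin(n))  # 0o2346 0b10011100110
    print(int('F77', 16))  # 3959, the integer part of Example 1.21 in decimal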

Hexadecimal to Decimal Conversion
As in octal, each hexadecimal digit is multiplied by the power of 16 which
represents the weight according to its position, and finally all the values are added.
Another way of converting a hexadecimal number into its decimal equivalent
is to first convert the hexadecimal number to binary and then convert from binary
to decimal.
Example 1.22: Convert [B6A]₁₆ to decimal number.
Solution: Hexadecimal number [B6A]₁₆
[B6A]₁₆ = B × 16^2 + 6 × 16^1 + A × 16^0
= 11 × 256 + 6 × 16 + 10 × 1 = 2816 + 96 + 10 = [2922]₁₀
Example 1.23: Convert [2AB.8]₁₆ to decimal number.
Solution: Hexadecimal number,
[2AB.8]₁₆ = 2 × 16^2 + A × 16^1 + B × 16^0 + 8 × 16^–1
= 2 × 256 + 10 × 16 + 11 × 1 + 8 × 0.0625
∴ [2AB.8]₁₆ = [683.5]₁₀
Example 1.24: Convert [A85]₁₆ to decimal number.
Solution: Converting the given hexadecimal number into binary, we have
            A      8      5
[A85]₁₆ = 1010   1000   0101
[1010 1000 0101]₂ = 2^11 + 2^9 + 2^7 + 2^2 + 2^0 = 2048 + 512 + 128 + 4 + 1
∴ [A85]₁₆ = [2693]₁₀
Example 1.25: Convert [269]₁₆ to decimal number.
Solution: Hexadecimal number,
            2      6      9
[269]₁₆ = 0010   0110   1001
[0010 0110 1001]₂ = 2^9 + 2^6 + 2^5 + 2^3 + 2^0 = 512 + 64 + 32 + 8 + 1
∴ [269]₁₆ = [617]₁₀
or, [269]₁₆ = 2 × 16^2 + 6 × 16^1 + 9 × 16^0 = 512 + 96 + 9 = [617]₁₀
Example 1.26: Convert [AF.2F]₁₆ to decimal number.
Solution: Hexadecimal number,
[AF.2F]₁₆ = A × 16^1 + F × 16^0 + 2 × 16^–1 + F × 16^–2
= 10 × 16 + 15 × 1 + 2 × 16^–1 + 15 × 16^–2
= 160 + 15 + 0.125 + 0.0586
∴ [AF.2F]₁₆ = [175.1836]₁₀

Decimal to Hexadecimal Conversion
One way to convert from decimal to hexadecimal is the hex dabble method. The
conversion is done in a similar fashion as in the case of binary and octal, taking the
factor for division and multiplication as 16.
Any decimal integer number can be converted to hex by successively dividing
by 16 until zero is obtained in the quotient. The remainders can then be written
from bottom to top to obtain the hexadecimal result.
The fractional part of the decimal number is converted to hexadecimal number
by multiplying it by 16, and writing down the carry and the fraction separately.
This process is continued until the fraction is reduced to zero or the required
number of significant bits is obtained.
Example 1.27: Convert [854]10 to hexadecimal number.
Solution: 854 ÷ 16 = 53 with a remainder of 6
           53 ÷ 16 =  3 with a remainder of 5
            3 ÷ 16 =  0 with a remainder of 3
∴ [854]₁₀ = [356]₁₆
Example 1.28: Convert [106.0664]₁₀ to hexadecimal number.
Solution: Integer part
106 ÷ 16 = 6 with a remainder of 10
  6 ÷ 16 = 0 with a remainder of 6
Fractional part
0.0664 × 16 = 1.0624 = 0.0624 with a carry of 1
0.0624 × 16 = 0.9984 = 0.9984 with a carry of 0
0.9984 × 16 = 15.9744 = 0.9744 with a carry of 15
0.9744 × 16 = 15.5904 = 0.5904 with a carry of 15
Fractional part [0.0664]₁₀ = [0.10FF]₁₆
Thus, the answer is [106.0664]₁₀ = [6A.10FF]₁₆
Example 1.29: Convert [65,535]₁₀ to its hexadecimal and binary equivalents.
Solution: (i) Conversion of decimal to hexadecimal number
65,535 ÷ 16 = 4095 with a remainder of F
 4,095 ÷ 16 =  255 with a remainder of F
   255 ÷ 16 =   15 with a remainder of F
    15 ÷ 16 =    0 with a remainder of F
∴ [65535]₁₀ = [FFFF]₁₆

(ii) Conversion of hexadecimal to binary number
      F      F      F      F
    1111   1111   1111   1111
∴ [65535]₁₀ = [FFFF]₁₆ = [1111 1111 1111 1111]₂
A typical microcomputer can store up to 65,536 bytes. The decimal
addresses of these bytes run from 0 to 65,535. The equivalent binary addresses
run from
0000 0000 0000 0000 to 1111 1111 1111 1111
The first 8 bits are called the upper byte and the second 8 bits are called the
lower byte.
When the decimal address is greater than 255, we have to use both the upper byte
and the lower byte.
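Splitting an address into its upper and lower bytes is a single divmod by 256 (a small illustrative sketch, not from the original text):

    address = 65535                    # FFFF in hexadecimal
    upper, lower = divmod(address, 256)
    print(format(upper, '08b'), format(lower, '08b'))   # 11111111 11111111
    print(format(address, '04X'))                       # FFFF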
Hexadecimal to Octal Conversion
This can be accomplished by first writing down the 4-bit binary equivalent of each
hexadecimal digit and then partitioning it into groups of 3 bits each. Finally, the
3-bit octal equivalent of each group is written down.
Example 1.30: Convert [2AB.9]₁₆ to octal number.
Solution: Hexadecimal number    2      A      B    .    9
                                ↓      ↓      ↓         ↓
4-bit numbers                 0010   1010   1011   .  1001
3-bit pattern                 001  010  101  011   .  100  100
                               ↓    ↓    ↓    ↓        ↓    ↓
Octal number                   1    2    5    3    .   4    4
∴ [2AB.9]₁₆ = [1253.44]₈
Example 1.31: Convert [3FC.82]₁₆ to octal number.
Solution: Hexadecimal number    3      F      C    .    8      2
4-bit binary numbers          0011   1111   1100   .  1000   0010
3-bit pattern                 001  111  111  100   .  100  000  100
                               ↓    ↓    ↓    ↓        ↓    ↓    ↓
Octal number                   1    7    7    4    .   4    0    4
∴ [3FC.82]₁₆ = [1774.404]₈
Notice that zeros are added to the rightmost bits in the above two examples
to complete the groups of 3 bits.
Octal to Hexadecimal Conversion
It is the reverse of the above procedure. First the 3-bit equivalent of each octal digit
is written down and partitioned into groups of 4 bits; then the hexadecimal equivalent
of each group is written down.
Example 1.32: Convert [16.2]₈ to hexadecimal number.
Solution: Octal number     1     6    .    2
                           ↓     ↓         ↓
3-bit binary              001   110   .   010
4-bit pattern               1110   .   0100
                             ↓          ↓
Hexadecimal                  E     .    4
∴ [16.2]₈ = [E.4]₁₆
Example 1.33: Convert [764.352]₈ to hexadecimal number.
Solution: Octal number     7     6     4    .    3     5     2
3-bit binary              111   110   100   .   011   101   010
4-bit pattern            0001  1111  0100   .  0111  0101  0000
                           ↓     ↓     ↓         ↓     ↓     ↓
Hexadecimal number         1     F     4    .    7     5     0
∴ [764.352]₈ = [1F4.75]₁₆

Integers and Fractions

Binary Fractions
A binary fraction can be represented by a series of 1s and 0s to the right of a binary
point. The weights of digit positions to the right of the binary point are given by
2^–1, 2^–2, 2^–3 and so on.
For example, the binary fraction 0.1011 can be written as,
0.1011 = 1 × 2^–1 + 0 × 2^–2 + 1 × 2^–3 + 1 × 2^–4
= 1 × 0.5 + 0 × 0.25 + 1 × 0.125 + 1 × 0.0625
(0.1011)₂ = (0.6875)₁₀

Mixed Numbers
Mixed numbers contain both integer and fractional parts. The weights of mixed
numbers are,
    2^3  2^2  2^1  2^0  .  2^–1  2^–2  2^–3  etc.
                        ↑
                   Binary Point
For example, a mixed binary number 1011.101 can be written as,
(1011.101)₂ = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 + 1 × 2^–1 + 0 × 2^–2 + 1 × 2^–3
= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 + 1 × 0.5 + 0 × 0.25 + 1 × 0.125
∴ [1011.101]₂ = [11.625]₁₀
When different number systems are used, it is customary to enclose the
number within brackets, with a subscript indicating the type of the number
system.

Check Your Progress
1. What is the base or radix of a number?
2. What is decimal number system?
3. What is octal number system?
4. Why is the double dabble method used?

1.3 BINARY ARITHMETIC

Arithmetic operations are performed in computers not by using decimal numbers,


as we do normally, but by using binary numbers. Arithmetic circuits in computers
and calculators perform arithmetic and logic operations. All arithmetic operations
take place in the arithmetic unit of a computer. The electronic circuit is capable of
doing addition of two or three binary digits at a time and the binary addition alone
is sufficient to do subtraction. Thus, a single circuit of a binary adder with suitable
shift register can perform all the arithmetic operations.
Arithmetic operations such as addition, subtraction, multiplication and division
can be performed on binary numbers.
1.3.1 Binary Addition
Binary addition is performed in the same manner as decimal addition. Binary addition
is the key to binary subtraction, multiplication and division. There are only four
cases that occur in adding two binary digits in any position. These are shown in
Table 1.6. In addition, when a carry comes in from the previous position:
(i) 1 + 1 + 1 = 11 (i.e., sum 1 with a carry of 1 into the next position)
(ii) 1 + 1 + 1 + 1 = 100
(iii) 10 + 1 = 11
Rules (1), (2) and (3) in Table 1.6 are the same as in decimal addition. Rule (4)
states that adding 1 and 1 gives 10 (meaning binary 10, i.e., decimal 2, and not
decimal 10), producing a carry into the next position. ‘Carry overs’ are performed
in the same manner as in decimal arithmetic. Since 1 is the largest digit in the binary
system, any sum greater than 1 requires that a digit be carried over.

Self-Instructional
20 Material
Table 1.6 Binary Addition

Sl. No.   Augend (A)  +  Addend (B)   Carry (C)   Sum (S)   Result
1            0        +     0            0           0        0
2            0        +     1            0           1        1
3            1        +     0            0           1        1
4            1        +     1            1           0        10

Example 1.34: Add the binary numbers (i) 011 and 101, (ii) 1011 and 1110,
(iii) 10.001 and 11.110, (iv) 1111 and 10010, and (v) 11.01 and 101.0111.
Solution:
(i)        Binary     Equivalent decimal
            11    ← Carry
            011              3
          + 101            + 5
  Sum  =  1000               8

(ii)       Binary    Decimal        (iii)      Binary     Decimal
            11    ← Carry                        1    ← Carry
            1011       11                       10.001      2.125
          + 1110     + 14                     + 11.110    + 3.750
  Sum  =  11001        25             Sum  =  101.111       5.875

(iv)       Binary    Decimal        (v)        Binary      Decimal
            11    ← Carry                      11 1   ← Carry
            1111       15                       11.01        3.25
         + 10010     + 18                   + 101.0111    + 5.4375
  Sum  = 100001        33            Sum  =  1000.1011      8.6875

Since the adder circuits in all digital systems can handle only two numbers at a
time, it is not necessary to consider the addition of more than two binary numbers.
When more than two numbers are to be added, the first two are added together
and then their sum is added to the third number, and so on. Almost all modern
digital machines can perform an addition operation in less than 1 μs.
The logic equation representing the sum is also known as the exclusive-OR
function and can be represented in Boolean algebra as S = A'B + AB' = A ⊕ B.
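The rules of Table 1.6, together with the carry cases above, describe a full adder: each column's sum bit is A XOR B XOR carry, and a carry is generated whenever at least two of the three inputs are 1. A Python sketch of column-by-column addition (ours; the function name is illustrative):

    def add_binary(a, b):
        a, b = a.zfill(len(b)), b.zfill(len(a))   # align the two bit strings
        carry, result = 0, []
        for x, y in zip(reversed(a), reversed(b)):
            x, y = int(x), int(y)
            result.append(str(x ^ y ^ carry))     # sum bit (exclusive OR)
            carry = (x & y) | (x & carry) | (y & carry)
        if carry:
            result.append('1')
        return ''.join(reversed(result))

    print(add_binary('1011', '1110'))    # 11001, as in Example 1.34 (ii)
    print(add_binary('1111', '10010'))   # 100001, as in Example 1.34 (iv)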

1.3.2 Binary Subtraction


Subtraction is the inverse operation of addition. To subtract, it is necessary to
establish a procedure for subtracting a larger digit from a smaller digit. The only
case in which this occurs with binary numbers is when 1 is subtracted from 0. The
result is 1, but it is necessary to borrow 1 from the next column to the left.
The rules of binary subtraction are shown in Table 1.7.
1. 0 – 0 = 0
2. 1 – 0 = 1
3. 1 – 1 = 0
4. 0 – 1 = 1 with a borrow of 1
5. 10 – 1 = 01

Table 1.7 Binary Subtraction

Sl. No.   Minuend (A)   –   Subtrahend (B)   Result
1             0         –        0           0
2             0         –        1           1 with a borrow of 1
3             1         –        0           1
4             1         –        1           0

Example 1.35:
Solution:
(i)         Binary    Decimal       (ii)        Binary    Decimal
             1001        9                      10000       16
           –  101      – 5                     –  011      – 3
Difference =  100        4                       1101       13

(iii)       Binary    Decimal       (iv)        Binary    Decimal
            110.01      6.25                     1101       13
          – 100.1     – 4.5                    – 1010     – 10
              1.11      1.75                     0011        3
Example 1.36: Show the binary subtraction of 128₁₀ from 210₁₀.
Solution: Converting the given decimal numbers into the corresponding
hexadecimal and binary numbers, we have
210 → D2H → 1101 0010
128 → 80H → 1000 0000
    1101 0010      D2H
  – 1000 0000    – 80H
    0101 0010      52H

1.3.3 Binary Multiplication


Multiplication of binary numbers is performed in the same manner as the
multiplication of decimal numbers. The following are the four basic rules for
multiplying binary digits:

1. 0 × 0 = 0
2. 0 × 1 = 0
3. 1 × 0 = 0
4. 1 × 1 = 1
4. 1 × 1 = 1 NOTES
In a computer, the multiplication operation is performed by repeated additions,
in much the same manner as the addition of all partial products to obtain the full
product. Since the multiplier digits are either 0 or 1, we always multiply by 0 or 1
and no other digit.
Example 1.37: Multiply the binary numbers 1011 and 1101.
Solution:
        1011     ← Multiplicand = 11₁₀
      × 1101     ← Multiplier   = 13₁₀
      ------
        1011
       0000      Partial products
      1011
     1011
    --------
    10001111     ← Final product = 143₁₀
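The repeated-addition scheme can be sketched by shifting the multiplicand left once for each multiplier bit and summing the partial products (an illustration, not part of the original text):

    def multiply_binary(multiplicand, multiplier):
        product = 0
        for position, bit in enumerate(reversed(multiplier)):
            if bit == '1':
                # each 1 bit contributes the multiplicand shifted left
                product += int(multiplicand, 2) << position
        return bin(product)[2:]

    print(multiply_binary('1011', '1101'))   # 10001111, i.e., 11 x 13 = 143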

1.3.4 Binary Division


The process of dividing one binary number (the dividend) by another (the divisor)
is the same as that followed for decimal numbers, which we usually refer
to as the method of ‘long division’. The rules for binary division are as follows:
1. 0 ÷ 1 = 0
2. 1 ÷ 1 = 1
3. 0 ÷ 0 = No meaning, as in the decimal system
4. 1 ÷ 0 = No meaning
In considering division, we will assume that the dividend is larger than the
divisor. The following are the steps for binary division:
1. Start from the left of the dividend.
2. Perform a series of subtractions in which the divisor is subtracted from the
dividend.
3. If subtraction is possible, put a 1 in the quotient and subtract the divisor
from the corresponding digits of the dividend.
4. If subtraction is not possible (divisor greater than remainder), record a zero
in the quotient. Bring down the next digit to add to the remainder digits.
Proceed as before in a manner similar to long division.
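These long-division steps can be sketched in Python as follows (ours; it assumes, as the rules above do, a non-zero divisor):

    def divide_binary(dividend, divisor):
        d = int(divisor, 2)
        quotient, remainder = '', 0
        for bit in dividend:                         # start from the left
            remainder = (remainder << 1) | int(bit)  # bring down the next digit
            if remainder >= d:                       # subtraction possible: put 1
                remainder -= d
                quotient += '1'
            else:                                    # not possible: record a zero
                quotient += '0'
        return quotient.lstrip('0') or '0', bin(remainder)[2:]

    print(divide_binary('1101', '10'))   # ('110', '1'): 13 / 2 = 6 remainder 1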

1.4 COMPLEMENTS

Complements are used in digital computers to simplify the subtraction operation,
that is, to easily represent negative numbers, and for logical manipulation.
There are two types of complement for each base system:
1. The radix (i.e. r’s) complement
2. The diminished radix [i.e. (r-1)’s] complement.
The (r–1)’s complement is obtained by subtracting the given number from the
largest possible number in the given base. The r’s complement is obtained by
adding 1 to the (r–1)’s complement.
That is, to determine the r’s complement, first write the (r–1)’s complement and
then add 1 to the least significant bit (LSB), that is, the rightmost bit.
The complements for some common base systems are as follows:

Number system    (r–1)’s complement    r’s complement
Binary           1’s complement        2’s complement
Octal            7’s complement        8’s complement
Decimal          9’s complement        10’s complement
Hexadecimal      15’s complement       16’s complement

Binary Number in Complement Form


The 1’s complement of a binary number is obtained by complementing all its bits,
that is, by replacing 0s with 1s and 1s with 0s. The 2’s complement of a binary
number is obtained by adding ‘1’ to its 1’s complement.
Example 1.38
1. The 1’s complement of the binary number (101101)₂ is obtained by changing
   1s to 0s and 0s to 1s (equivalently, by subtracting each bit from 1):
       111111
     – 101101
       ------
      (010010)₂
2. The 2’s complement of the binary number (10100)₂ is
       1’s complement   01011
       add 1           +    1
                        -----
       2’s complement   01100
3. The 1’s complement of (10010110)₂ is (01101001)₂.
4. The 2’s complement of (10010110)₂ is (01101010)₂.
5. The 2’s complement of the binary number (11001.11)₂ is
       1’s complement   00110.00
       add 1           +       1
                        --------
       2’s complement   00110.01
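Both binary complements are easy to compute on bit strings (a sketch for whole-number bit strings; fractional cases such as item 5 above would need the radix point handled separately):

    def ones_complement(bits):
        return ''.join('1' if b == '0' else '0' for b in bits)   # flip every bit

    def twos_complement(bits):
        flipped = ones_complement(bits)
        width = len(bits)
        # add 1 to the 1's complement, keeping the same bit width
        return format((int(flipped, 2) + 1) % (1 << width), '0%db' % width)

    print(ones_complement('10010110'))   # 01101001, as in item 3 above
    print(twos_complement('10010110'))   # 01101010, as in item 4 above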

Octal Number in Complement Form


In the octal number system, we have the 7’s and 8’s complements. The 7’s
complement of a given octal number is obtained by subtracting each octal digit
from 7. The 8’s complement is obtained by adding ‘1’ to the 7’s complement.
Example 1.39
1. The 7’s complement of the octal number (543)₈ is
       777
     – 543
       ---
      (234)₈
2. The 8’s complement of (2470)₈ is
       7777
     – 2470
       ----
       5307   (7’s complement)
     +    1
       ----
       5310   (8’s complement)
3. The 7’s complement of (562)₈ would be (215)₈.
4. The 8’s complement of (562)₈ would be (216)₈.
Decimal Number in Complement Form
In the decimal number system, we have the 9’s and 10’s complements. The 9’s
complement of a given decimal number is obtained by subtracting each digit from
9. The 10’s complement is obtained by adding ‘1’ to the 9’s complement.

Example 1.40
1. 9’s complement of decimal number (567)10 is
       999
     − 567
     -----
       432    i.e., (432)10
2. 10’s complement of (5370)10 is
       9999
     − 5370
     ------
       4629   (9’s complement)
     +    1
     ------
       4630   (10’s complement)
3. The 9’s complement of (2496)10 would be (7503)10.
4. The 10’s complement of (2496)10 is (7504)10.
Hexadecimal Number in Complement Form
The 15’s and 16’s complements are defined with respect to the hexadecimal number
system. The 15’s complement is obtained by subtracting each hex digit from 15.
The 16’s complement is obtained by adding ‘1’ to the 15’s complement.
Example 1.41
1. 15’s complement of hexadecimal number (2789)16 is
       FFFF
     − 2789
     ------
       D876   i.e., (D876)16
2. 16’s complement of (5279)16 is
       FFFF
     − 5279
     ------
       AD86   (15’s complement)
     +    1
     ------
       AD87   (16’s complement)
3. The 15’s complement of (3BF)16 would be (C40)16.
4. The 16’s complement of (2AE)16 would be (D52)16.
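The same two rules work for every base, so the complements can be computed generically; the following Python sketch is ours (illustrative names), assuming digits 0–9 and A–F.

DIGITS = "0123456789ABCDEF"

def diminished_radix_complement(number: str, base: int) -> str:
    """(r-1)'s complement: subtract each digit from (base - 1)."""
    return "".join(DIGITS[base - 1 - DIGITS.index(d)] for d in number.upper())

def radix_complement(number: str, base: int) -> str:
    """r's complement: add 1 to the (r-1)'s complement."""
    n = len(number)
    value = int(diminished_radix_complement(number, base), base) + 1
    out = []
    for _ in range(n):                  # convert back to n digits in the given base
        value, rem = divmod(value, base)
        out.append(DIGITS[rem])
    return "".join(reversed(out))

print(diminished_radix_complement("2789", 16))   # D876
print(radix_complement("5279", 16))              # AD87
print(radix_complement("2470", 8))               # 5310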

1.5 NUMERIC AND CHARACTER CODES

Representing numbers within the computer circuits, registers and the memory unit
by means of electrical signals or magnetism is called NUMERIC CODING. In
the computer system, the numbers are stored in binary form, since any number
can be represented by the use of 1’s and 0’s only. Numeric codes are divided into
two categories, i.e., weighted codes and non-weighted codes. The different types of
weighted codes are:
(i) BCD Code,
(ii) 2-4-2-1 Code,
(iii) 4-2-2-1 Code,
(iv) 5-2-1-1 Code,
(v) 7 – 4 – 2 – 1 Code, and
(vi) 8-4-2-1 Code.
The Non-Weighted Codes are of two types, i.e.,
(i) Non-Error Detecting Codes and
(ii) Error Detecting Codes.
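To illustrate how a weighted code works, the following Python sketch encodes decimal digits in the 8-4-2-1 (BCD) code from the list above; the function names are illustrative and not part of the text.

def to_bcd(number: int) -> str:
    """Encode each decimal digit as a 4-bit 8-4-2-1 (BCD) group."""
    return " ".join(format(int(d), "04b") for d in str(number))

def from_bcd(bcd: str) -> int:
    """Decode space-separated 4-bit groups back to a decimal number."""
    return int("".join(str(int(group, 2)) for group in bcd.split()))

print(to_bcd(295))                   # 0010 1001 0101
print(from_bcd("0010 1001 0101"))    # 295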
Character codes
Alphanumeric codes are also called character codes. These are binary codes
used to represent alphanumeric data, including letters of the alphabet, numbers,
mathematical symbols and punctuation marks, in a form that is understandable
and processable by a computer.
All these codes are discussed in detail in unit 6.

Check Your Progress


5. What are the different types of arithmetic operations?
6. What are the two types of complement for each base system?

1.6 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The base or radix of a number is defined as the number of different digits


which can occur in each position in the number system.
2. The number system which utilizes ten distinct digits, i.e., 0, 1, 2, 3, 4, 5, 6,
7, 8 and 9 is known as decimal number system. It represents numbers in
terms of groups of ten.
3. A number system that uses eight digits, 0, 1, 2, 3, 4, 5, 6 and 7 is called an
octal number system.
4. A popular method known as the double dabble method, also known as the
divide-by-two method, is used to convert a large decimal number into its binary
equivalent. In this method, the decimal number is repeatedly divided by 2
and the remainder after each division is used to indicate the coefficient of
the binary number to be formed.
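A minimal Python sketch of the divide-by-two method described in answer 4 (the function name is ours):

def decimal_to_binary(n: int) -> str:
    """Double dabble (divide-by-two): collect remainders of repeated division."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))    # remainders are produced LSB first
    return "".join(reversed(bits))

print(decimal_to_binary(43))   # 101011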
5. Arithmetic operations such as addition, subtraction, multiplication and
division can be performed on binary numbers.
6. There are two types of complement for each base system.
(i) The radix (i.e. r’s) complement
(ii) The diminished radix [i.e. (r –1)’s] complement.

1.7 SUMMARY

• A number of base, or radix r, is a system that uses distinct symbols of r
digits. Numbers are represented by a string of digit symbols. To determine
the quantity that the number represents, it is necessary to multiply each digit
by an integer power of r and then form the sum of all the weighted digits.
• The number system which utilizes ten distinct digits, i.e., 0, 1, 2, 3, 4, 5, 6,
7, 8 and 9 is known as decimal number system.
• A number system that uses only two digits, 0 and 1 is called the binary
number system. The binary number system is also called a base two system.
• Binary number system is used in digital computers because all electrical and
electronic circuits can be made to respond to the two states concept.
• A number system that uses eight digits, 0, 1, 2, 3, 4, 5, 6 and 7, is called an
octal number system. It has a base of eight.
• The hexadecimal system groups numbers by sixteen and powers of sixteen.
Hexadecimal numbers are used extensively in microprocessor work.
• Arithmetic operations such as addition, subtraction, multiplication and
division can be performed on binary numbers.
• Complements are used in digital computers to simplify the subtraction
operation, that is, to easily represent negative numbers, and for logical
manipulation.

1.8 KEY WORDS

• Decimal number system: The number system that utilizes ten distinct
digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. It is a base 10 system.
• Binary number system: A number system that uses only two digits, 0 and
1, known as bits or binary digits. It is a base 2 system.
• Nibble: A binary number with 4 bits.
• Octal number system: A number system that uses eight digits, 0, 1, 2, 3,
4, 5, 6 and 7. It has base 8.
• 2421 code: This is a weighted code and its weights are 2, 4, 2 and 1.

1.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions

1. What are the different types of number systems?


2. What is the advantage of using binary number system?
3. How do you perform addition and subtraction using 1’s and 2’s complement?
Illustrate with examples.
4. Convert the following numbers to their binary equivalents:
(a) 37 (b) 14 (c) 167 (d) 72.45
5. Convert the following decimal numbers to equivalent binary numbers:
(a) 43 (b) 64 (c) 4096 (d) 0.375
Long Answer Questions
1. Convert the following from binary to octal:
(a) 101101 (b) 101101110 (c) 10110111 (d) 11010.011
2. Convert the following from decimal to octal and then to binary:
(a) 59 (b) 0.58 (c) 64.2 (d) 199.3
3. Convert each octal number to binary:
(a) (15)8 (b) (24)8 (c) (167)8 (d) (234)8
(e) (173)8 (f) (157)8 (g) (4653)8 (h) (1723)8 (i) (2645)8
4. Convert each hexadecimal number to binary:
(a) (49)16 (b) (324)16 (c) (649)16 (d) (ABC)16
5. Convert each binary number to hexadecimal:
(a) (1100110)2 (b) (11101111)2 (c) (1011110101.0111)2
6. Convert each decimal number to hexadecimal:
(a) (16)10 (b) (19)10 (c) (35)10 (d) (439)10
7. Convert the following binary numbers to octal and then to hexadecimal:
(a) 101100110011 (b) 1011101.1011
8. Convert each hexadecimal number to decimal:
(a) (49)16 (b) (632)16 (c) (54)16 (d) (AB0)16

1.10 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.

UNIT 2 BOOLEAN ALGEBRA AND COMBINATIONAL CIRCUITS
Structure
2.0 Introduction
2.1 Objectives
2.2 Logic Gates and Inverter
2.2.1 AND Gate
2.2.2 OR Gate
2.2.3 NAND Gate
2.2.4 NOR Gate
2.2.5 Exclusive OR (XOR) Gates
2.2.6 Exclusive NOR Gates
2.3 Boolean Algebra and Logic Simplification
2.3.1 Laws and Rules of Boolean Algebra
2.3.2 De-Morgan’s Theorems
2.3.3 Simplification of Logic Expressions using Boolean Algebra
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings

2.0 INTRODUCTION

In this unit, you will learn about the logic gates and Boolean algebra. A logic gate
is an electronic circuit that makes logic decisions. Logic gates have only one output
and two or more inputs - except for the NOT gate, which has only one input. The
output signal appears only for certain combinations of input signals. Gates do the
manipulation of binary information. To make logic decisions, three basic logic circuits
(called gates) are used: the OR circuit, the AND circuit and the NOT circuit.
Logic gates are building blocks, which are available in the form of various IC
families. Gates are blocks of hardware that produce signals of binary 1 or 0 when
the logic input requirements are satisfied. Each gate has a distinct graphic symbol
and its operation can be described by means of an algebraic function. Logic gates
provide a simple and straightforward method of minimizing Boolean expressions.

2.1 OBJECTIVES

After going through this unit, you will be able to:


NOTES  Discuss the various types of logic gates
 Understand the three basic logic operations
 Understand exclusive OR (XOR) gates
 Explain exclusive NOR gates
 Understand De-Morgan's theorems

2.2 LOGIC GATES AND INVERTER

A logic gate is an electronic circuit, which makes logical decisions. To arrive at


these decisions, the most common logic gates used are OR, AND, NOT, NAND
and NOR gates. The NAND and NOR gates are called the universal gates.
The exclusive-OR gate is another logic gate, which can be constructed using basic
gates, such as AND, OR and NOT gates.
Logic gates have two or more inputs and only one output except for the
NOT gate, which has only one input. The output signal appears only for certain
combinations of the input signals. The manipulation of binary information is done
by the gates. The logic gates are the building blocks of hardware which are available
in the form of various IC families. Each gate has a distinct logic symbol and its
operation can be described by means of an algebraic function. The relationship
between input and output variables of each gate can be represented in a tabular
form called a truth table.
An inverter performs the function of negation on signals: it complements the
Boolean expression of the input signal. Boolean algebra is a system of mathematical
logic, using the functions AND, NOT and OR.
2.2.1 AND Gate
An AND gate has two or more inputs and a single output, and it operates in
accordance with the following definition: The AND gate is defined as an electronic
circuit in which all the inputs must be HIGH in order to have a HIGH output.
The truth table for the 2-input AND gate is shown in Table 2.1. It is seen
that the AND gate has a HIGH output only when both A and B are HIGH. When
there are more inputs, all inputs must be HIGH for a HIGH output. For this reason,
the AND gate is also called ALL GATE. The truth table for the 3-input AND gate
is shown in Table 2.2.

Table 2.1 2-Input AND Gate

Inputs        Output
A   B         Y
0   0         0
0   1         0
1   0         0
1   1         1

Table 2.2 3-Input AND Gate

Inputs          Output
A   B   C       Y
0   0   0       0
0   0   1       0
0   1   0       0
0   1   1       0
1   0   0       0
1   0   1       0
1   1   0       0
1   1   1       1

Logic Symbol: The schematic symbols of 2-input, 3-input and 4-input


AND gates are shown symbolically in Figure 2.1.
Y=ABC A Y=ABCD
Y=AB A B
A B
C
B C D
(a) (b) (c)

Fig. 2.1 Schematic Symbols of AND Gate

2.2.2 OR Gate
The OR gate is a digital logic gate that implements logical disjunction. A basic circuit
has two or more inputs and a single output and it operates in accordance with the
following definition: The output of an OR gate assumes state 1 if one or more (or all)
inputs assume state 1.
From the truth table it can be seen that all switches must be opened (0 state)
for the light to be off (output 0 state). This type of circuit is called an OR gate.
Table 2.3 shows the truth table of the three-input OR gate.
Table 2.3 Truth Table of Three-Input OR Gates

Inputs Outputs
A B C Y=A+B+C
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 1
Table 2.4 is the truth table for the two-input OR gate. The OR gate is an ANY
OR ALL gate; an output occurs when any or all of the inputs are high. Table 2.5
shows the binary equivalent details, in which A and B are the inputs and
Y = A + B is the output.

Table 2.4 Two-Input OR Gate

Inputs            Output
A      B          Y = A + B
Low    Low        Low
Low    High       High
High   Low        High
High   High       High

Table 2.5 Binary Equivalent

Inputs        Output
A    B        Y = A + B
0    0        0
0    1        1
1    0        1
1    1        1

In general, if n is the number of input variables, then there will be 2^n possible
cases, since each variable can take on either of two values.

Fig. 2.2 Schematic Symbols of OR Gate

Logic Symbol: The schematic symbols of an OR gate for two-input, three-


input and four-input are shown in Figure 2.2.
2.2.3 NAND Gate
The word NAND is the contraction of NOT - AND. A negation following an
AND gate is called a NOT-AND or a NAND gate. A NAND gate is the cascade
combination of AND and NOT gates. It is just an AND gate followed by an
inverter. The NAND operation is the complement of the AND operation and is
defined by
Y = (A·B)'
The NAND operation is also called the Sheffer stroke. An alternative symbol
for it is
Y = A ↑ B = (A·B)'
The graphic symbol for NAND function is shown in Figure 2.3. The NAND
gate first performs the AND operation on the inputs, and then performs the NOT
operation on the AND product.


Fig. 2.3 Logic Symbol of NAND Gate


Read the expression Y = (AB)' as ‘Y equals NOT (A AND B)’ or ‘Y equals the
complement of A AND B’.
Table 2.6 shows that the NAND gate output in each case is the inverse of
the AND output. The NAND gate produces a LOW output only when all its
inputs are HIGH. This same type of operation can be extended to NAND gates
with more than two inputs.
Table 2.6 Truth Table for NAND Gate

Inputs        NAND Operation     AND Operation
A   B         Y = (AB)'          Y = AB
0   0         1                  0
0   1         1                  0
1   0         1                  0
1   1         0                  1

The truth table also explains if one of the inputs is at logic 0, whatever be the
other input [(a) 0 and 0 and (b) 0 and 1], the NAND gate is disabled, i.e., closed
as in AND gate and it does not allow the input signal to pass through, and therefore,
the output remains at logic 1 as shown in Figure 2.4(a). In the case of AND gate,
the output remains at logic 0 for the above condition. On the other hand, when one
of the inputs is at logic 1, whatever be the other input [(c) 1 and 0 and (d) 1 and 1],
the NAND gate is enabled, i.e., opened as in AND gate and it allows the input
signal to pass through. At the output, we get the complement of the input signal,
shown in Figure 2.4(b).


Fig. 2.4

2.2.4 NOR Gate


The word NOR is contracted from NOT and OR. The negation of the OR function
is called NOT-OR or NOR. A NOR gate is the cascade combination of NOT and
OR gates. The NOR operation is the complement of the OR operation and is
defined by,
Y = (A + B)'
The NOR operation is also called the Peirce arrow operation and is
symbolically represented by,
Y = A ↓ B = (A + B)'
The NOR gate first performs the OR operation on the inputs, and then
performs the NOT operation on the OR sum. Read the expression as ‘Y equals
NOT (A OR B)’ or ‘Y equals the complement of A OR B’.
Logic Symbol: The schematic symbol of the NOR gate is the OR symbol
with a small circle on the output. The small circle represents the operation of
inversion. It is shown in Figure 2.5.

Fig. 2.5 Logic Symbol of NOR Gate

Truth Table: Table 2.7 (Truth table) shows that the NOR output in each
case is the inverse of the OR output. The same operation can be extended to
NOR gates with more than two inputs.
Table 2.7 NOR Gate

Inputs          NOR Operation      OR Operation
A    B          Y = (A + B)'       Y = A + B
(a) 0    0      1                  0
(b) 0    1      0                  1
(c) 1    0      0                  1
(d) 1    1      0                  1
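A short Python sketch (ours, for illustration only) confirms that NAND and NOR are the bit-wise complements of AND and OR over every input combination:

# Each gate maps two input bits to one output bit.
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)   # complement of AND
NOR  = lambda a, b: 1 - (a | b)   # complement of OR

print("A B | AND NAND | OR NOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), "  ", NAND(a, b), " |", OR(a, b), NOR(a, b))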

2.2.5 Exclusive OR (XOR) Gates


An Exclusive OR function obeys the definition that the output of a two-input XOR
assumes the logic state 1 if only one input assumes the logic state 1. This is
equivalent to saying that the output is a logic 1, if the input A or input B is a logic 1
exclusively, i.e., when they are not 1 simultaneously. An exclusive OR gate is
made up of AND, OR and NOT gates connected as shown in Figure 2.6(a). The
exclusive OR function can be written as,
Y = A ⊕ B = A'B + AB'
Read the expression as ‘Y equals A XOR B’.

Fig. 2.6 XOR Gate: (a) XOR Gate using Basic Gates (b) Logic Symbol

The logic symbol for the XOR gate is shown in Figure 2.6(b) and the truth
table for the XOR operation is given in Table 2.8.
Table 2.8 Truth Table for XOR Operation

Inputs        Output
A   B         Y = A ⊕ B
0   0         0
0   1         1
1   0         1
1   1         0

The truth table of the XOR gate shows the output is HIGH when any, but
not all, of the inputs is at 1. This exclusive feature eliminates a similarity to the OR
gate. The XOR gate responds with a HIGH output only when an odd number of
inputs is HIGH. When there is an even number of HIGH inputs, such as two or
four, the output will always be LOW.
Note the unique XOR symbol, ⊕, a circle around the + symbol of OR.
Read the expression as ‘Y equals A exclusively ORed with B’.
Fig. 2.7 Exclusive OR Gate

In equation form, the definition of the XOR function can be written as
Y = (A + B)·(AB)'
which in the statement form is read as ‘if A = 1 OR B = 1, but NOT
simultaneously, then Y = 1’. This function is implemented in logic form as shown in
Figure 2.7.

NOTES XOR Laws


1. A0 = A
2. A1 = A
3. AB = BA
4. AA = 0
5. A A = 1
6. (A  B)  C = A  (B  C)
These laws can be verified by assigning values 1 and 0 to the inputs A and B.
A  A  A ...  A = 0 if total number of terms is even.
= A if total number of terms is odd.
A A A ... A =1 if total number of terms is even.
= A if total number of terms is odd.
A  1  1  ...  1 = A if total number of terms is even.
= A if total number of terms is odd.
A  0  0  ...  0 = A
The standard and low power series 54/74 include quadruple two-input
XOR gates (SN 54/7486 and SN 54L/74L86). The two-input XOR gate may be
implemented using NAND gates as shown in Figure 2.8.
Fig. 2.8 XOR Gate using NAND Gates
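The four-NAND construction of Figure 2.8 can be verified in a few lines of Python; this is a sketch of the gate structure, with function names of our own choosing:

nand = lambda a, b: 1 - (a & b)

def xor_from_nands(a: int, b: int) -> int:
    """Four-NAND realization of XOR (the structure of Figure 2.8)."""
    m = nand(a, b)          # first gate: (AB)'
    p = nand(a, m)          # second gate: (A·(AB)')'
    q = nand(b, m)          # third gate:  (B·(AB)')'
    return nand(p, q)       # fourth gate combines to A xor B

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b
print("Four NAND gates reproduce the XOR truth table.")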

It should be noted that the same truth table applies when adding two binary
digits (bits). A 2-input XOR circuit is, therefore, sometimes called a modulo-2
adder or a half-adder. The name half-adder refers to the fact that a possible
carry-bit, resulting from an addition of two preceding bits, has not been taken into
account. A full addition is performed by a second XOR circuit with the output
signal of the first circuit and the carry as input signals.
Fig. 2.9 Cascading of Two XOR Circuits

The configuration of Figure 2.9 is a cascading of two XOR circuits, resulting
in an XORing of three variables A, B and C. Consequently, the sum output of a full
adder for two bits is an XORing of the 2 bits to be added and the carry of the
preceding adding stage. The logic expression of an XORing of the three variables
A, B and C is,
A ⊕ B ⊕ C = (A'B + AB')C' + (A'B + AB')'C
          = (A'B + AB')C' + (A'B)'·(AB')'·C
          = (A'B + AB')C' + (A + B')(A' + B)C
          = A'BC' + AB'C' + (A'B' + AB)C
∴ A ⊕ B ⊕ C = A'BC' + AB'C' + A'B'C + ABC
In general, an XORing of n variables results in a logical 1 output if an odd
number of the input variables are 1s. An XORing of n variables may be obtained
by cascading two-input XOR circuits.
Studying the first and second properties of Table 2.9, we notice that it is
simple, using an XOR gate, to cause a logic variable to become complemented or
to allow it to pass through the gate unchanged.
Table 2.9 Truth Table for XOR Operation

Logic variable input     Control signal     Output Y
A                        0                  A
A                        1                  A'
This is done by using one XOR input as a control input and the other as the
logic variable input, as shown in Figure 2.10.
Fig. 2.10 Control Input and Logic Variable Input
2.2.6 Exclusive NOR Gates
The exclusive NOR circuit, abbreviated XNOR, is the last of the seven basic logic
gates. The XNOR gate is an XOR gate followed by an inverter. The XNOR output is
LOW when the inputs have an odd number of 1s. The graphic symbol of the XNOR
gate is shown in Figure 2.11.

Fig. 2.11 Schematic Symbol of XNOR Gate

The truth table is given in Table 2.10. Note, in the output column, that Y is
the complement of the output of the XOR gate. The Boolean expression for the
XNOR gate is,
Y = (A ⊕ B)' = (A'B + AB')' = AB + A'B'
Read the expression as ‘Y equals A exclusively NORed with B’.
According to De Morgan’s theorem,
(A'B + AB')' = (A'B)'·(AB')' = (A + B')·(A' + B) = AB + A'B'
The 2-input XNOR gate is immensely useful for bit comparison and it
recognizes when the two inputs are identical. Hence, this gate is also called the
comparator or the coincidence circuit. XNOR gate is also used as an even parity
generator.
Table 2.10 Truth Table for XNOR Gate

Inputs        Output
A   B         Y = (A ⊕ B)'
0   0         1
0   1         0
1   0         0
1   1         1

Check Your Progress


1. What is logic gate?
2. What is AND gate?
3. What is known as complementary circuit?
4. What is NAND gate?
5. Define NOR gate.
6. How NOR gate is symbolized schematically?
2.3 BOOLEAN ALGEBRA AND LOGIC SIMPLIFICATION

Boolean algebra or Boolean logic was developed by English mathematician George
Boole. It is considered as a logical calculus of truth values and resembles the algebra
of real numbers along with the numeric operations of multiplication xy, addition
x + y, and negation ¬x substituted by the respective logical operations of conjunction
x ∧ y, disjunction x ∨ y and complement ¬x. These sets of rules explain specific
propositions whose result would be either true (1) or false (0). In digital logic, these
rules are used to define digital circuits whose state can be either 1 or 0.
Boolean logic forms the basis for computation in contemporary binary computer
systems. Using Boolean equations, any algorithm or any electronic computer circuit
can be represented. Even one Boolean expression can be transformed into an
equivalent expression by applying the theorems of Boolean algebra. This helps in
converting a given expression to a canonical or standardized form and minimizing
the number of terms in an expression. By minimizing terms and expressions the
designer can use less number of electric components while creating electrical circuits
so that the cost of system can be reduced. Boolean logical operations are performed
to simplify a Boolean expression using the following basic and derived operations.
Basic Operations: Boolean algebra is specifically based on logical
counterparts to the numeric operations of multiplication xy, addition x + y, and negation −x,
namely conjunction x ∧ y (AND), disjunction x ∨ y (OR) and complement or negation
¬x (NOT). In digital electronics, the AND is represented as a multiplication, the OR
is represented as an addition and the NOT is denoted with a postfix prime, for
example A', which means NOT A. Conjunction is the closest of these three operations
to its numeric counterpart. As a logical operation, the conjunction of two propositions is true when both
propositions are true, and false otherwise. Disjunction works almost like addition
with one exception, i.e., the disjunction of 1 and 1 is neither 2 nor 0 but 1. Hence, the
disjunction of two propositions is false when both propositions are false, and true
otherwise. The disjunction is also termed the dual of conjunction. Logical negation,
however, does not work like numerical negation. It corresponds to incrementation,
i.e., ¬x = x + 1 mod 2. An operation with this property is termed an involution. Using
negation we can formalize the notion that conjunction is dual to disjunction as per
De Morgan’s laws, ¬(x ∧ y) = ¬x ∨ ¬y and ¬(x ∨ y) = ¬x ∧ ¬y. These can also be
construed as definitions of conjunction in terms of disjunction and vice versa: x ∧ y
= ¬(¬x ∨ ¬y) and x ∨ y = ¬(¬x ∧ ¬y).
Derived operations: Other Boolean operations can be derived from these
by composition. For example, implication x→y is a binary operation, which is
false when x is true and y is false, and true otherwise. It can also be expressed as
x→y = ¬x ∨ y, or equivalently ¬(x ∧ ¬y). In Boolean logic this operation is termed as
material implication, which distinguishes it from related but non-Boolean logical
concepts. The basic concept is that an implication x→y is by default true.
Boolean algebra, however, does have an exact counterpart called eXclusive-
OR (XOR) or parity, represented as x ⊕ y. The XOR of two propositions is true only
when exactly one of the propositions is true. Further, the XOR of any value with
itself vanishes, for example x ⊕ x = 0. Its digital electronics symbol is a hybrid of the
disjunction symbol and the equality symbol. XOR is the only binary Boolean operation
that is commutative and whose truth table has equally many 0s and 1s.
Another example is x|y, the NAND gate in digital electronics, which is false
when both arguments are true and true otherwise. NAND can be defined by
composition of negation with conjunction because x|y = ¬(x ∧ y). It does not have
its own schematic symbol and is represented using an AND gate with an inverted
output. Unlike conjunction and disjunction, NAND is a binary operation that can be
used to obtain negation using the notation ¬x = x|x. Using negation one can define
conjunction in terms of NAND through x ∧ y = ¬(x|y), from which all other Boolean
operations of nonzero parity can be obtained. NOR, ¬(x ∨ y), is termed as the
evident dual of NAND and is equally used for this purpose. This universal character
of NAND and NOR has been widely used for gate arrays and also for integrated
circuits with multiple general-purpose gates.
In logical circuits, a simple adder can be made using an XOR gate to add the
numbers and a series of AND, OR and NOT gates to create the carry output. XOR
is also used for detecting an overflow in the result of a signed binary arithmetic
operation, which occurs when the leftmost retained bit of the result is not the same
as the infinite number of digits to the left.
2.3.1 Laws and Rules of Boolean Algebra
Boolean algebra is a system of mathematical logic. Properties of ordinary algebra
are valid for Boolean algebra. In Boolean algebra, every number is either 0 or 1.
There are no negative or fractional numbers. Though many of these laws have
already been discussed, they provide the tools necessary for simplifying Boolean
expressions. The following are the basic laws of Boolean algebra:
Laws of Complementation
The term complement means to invert, to change 1s to 0s and 0s to 1s. The
following are the laws of complementation:
Law 1    0' = 1
Law 2    1' = 0
Law 3    (A')' = A

OR Laws                      AND Laws
Law 4    0 + 0 = 0           Law 12    0·0 = 0
Law 5    0 + 1 = 1           Law 13    1·0 = 0
Law 6    1 + 0 = 1           Law 14    0·1 = 0
Law 7    1 + 1 = 1           Law 15    1·1 = 1
Law 8    A + 0 = A           Law 16    A·0 = 0
Law 9    A + 1 = 1           Law 17    A·1 = A
Law 10   A + A = A           Law 18    A·A = A
Law 11   A + A' = 1          Law 19    A·A' = 0
Laws of ordinary algebra that are also valid for Boolean algebra are:
Commutative Laws
Law 20    A + B = B + A
Law 21    A·B = B·A
Associative Laws
Law 22    A + (B + C) = (A + B) + C
Law 23    A·(B·C) = (A·B)·C
Distributive Laws
Law 24    A·(B + C) = A·B + A·C
Law 25    A + BC = (A + B)·(A + C)
Law 26    A + A'B = A + B
Example 2.1: Prove A + BC = (A + B)(A + C).
Solution: A + BC = A·1 + BC                  Law A·1 = A
                 = A(1 + B) + BC             Law 1 + A = 1
                 = A·1 + AB + BC             Law A(B + C) = AB + AC
                 = A(1 + C) + AB + BC        Law 1 + A = 1
                 = A·1 + AC + AB + BC
                 = A·A + AC + AB + BC        Law A·A = A
                 = A(A + C) + B(A + C)
∴ A + BC = (A + C)(A + B)
Alternative proof:
(A + C)(A + B) = AA + AB + AC + BC
               = A + AB + AC + BC
               = A(1 + B) + AC + BC
               = A·1 + AC + BC
               = A(1 + C) + BC
               = A + BC

Example 2.2: Prove A + A'B = A + B.
Solution:
A + A'B = A·1 + A'B               Law A·1 = A
        = A(1 + B) + A'B          Law 1 + A = 1
        = A·1 + AB + A'B          Law A(B + C) = AB + AC
        = A + B(A + A')           Law A·1 = A
        = A + B·1                 Law A + A' = 1
∴ A + A'B = A + B                 Law A·1 = A

2.3.2 De-Morgan’s Theorems
A great mathematician, De Morgan, contributed two of the most important theorems
of Boolean algebra. De Morgan’s theorems are extremely useful in simplifying an
expression in which a sum or product of variables is complemented. The two
theorems are as follows:
1. Theorem 1: (A + B + C + …)' = A'·B'·C'·…
2. Theorem 2: (A·B·C·…)' = A' + B' + C' + …
The complement of an OR sum equals the AND product of the complements.
The complement of an AND product is equal to the OR sum of the complements.
These two theorems can be easily proved by checking each one for all values
of A, B, C, etc.
The complement of any Boolean expression may be found by means of these
theorems. In these rules, two steps are used to form a complement.
1. The + symbols are replaced with · symbols and · symbols with + symbols.
2. Each of the terms in the expression is complemented.
Implications of De Morgan’s Theorems
Consider Theorem 1, (A + B)' = A'·B'
The left-hand side of the equation can be viewed as the output of a NOR gate
whose inputs are A and B. The right-hand side of the equation is the result of first
inverting both A and B and then putting them through an AND gate. These two
representations are equivalent, as shown in Figure 2.12. Hence, an AND gate
with inverters on each of its inputs is equivalent to a NOR gate.

Fig. 2.12 Logic Gates for the Theorem (A + B)' = A'·B'

Consider Theorem 2, (A·B)' = A' + B'
The left-hand side of the equation can be implemented by a NAND gate with
inputs A and B. The right-hand side can be implemented by first inverting inputs A
and B and then putting them through an OR gate. These two equivalent
representations are shown in the following figure. The OR gate with inverters on each
of its inputs is equivalent to the NAND gate. When the OR gate with inverted
inputs is used to represent the NAND function, it is usually drawn as shown in
Figure 2.13(b).

Fig. 2.13 Logic Gates for the Theorem (A·B)' = A' + B'


De Morgan’s theorems can be proved for any number of variables; the proof of
these two theorems for two input variables is shown in Table 2.11.
Table 2.11 De Morgan’s Theorems

A  B  A'  B'  A+B  A·B  (A+B)'  (A·B)'  A'+B'  A'·B'
0  0  1   1   0    0    1       1       1      1
0  1  1   0   1    0    0       1       1      0
1  0  0   1   1    0    0       1       1      0
1  1  0   0   1    1    0       0       0      0
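The table can also be checked mechanically; the following Python sketch (ours, for illustration) verifies both theorems over all input combinations:

from itertools import product

for A, B in product((0, 1), repeat=2):
    lhs1 = 1 - (A | B)            # (A + B)'
    rhs1 = (1 - A) & (1 - B)      # A'·B'
    lhs2 = 1 - (A & B)            # (A·B)'
    rhs2 = (1 - A) | (1 - B)      # A' + B'
    assert lhs1 == rhs1 and lhs2 == rhs2
print("Both De Morgan theorems verified for all input combinations.")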

Consensus Theorem

In the Boolean expression AB + A'C + BC, the term BC is redundant. This redundant
term can be eliminated to form the equivalent Boolean expression AB + A'C. The
theorem used for a simplification of this type is known as the consensus theorem:
AB + A'C + BC = AB + A'C
Proof:
AB + A'C + BC = AB + A'C + (A + A')BC
              = AB + A'C + ABC + A'BC
              = AB(1 + C) + A'C(1 + B)
              = AB + A'C
Dual of Consensus Theorem
It can be stated as
(A + B)(A' + C)(B + C) = (A + B)(A' + C)
Proof:
(A + B)(A' + C)(B + C) = (AA' + AC + A'B + BC)(B + C)
                       = (AC + A'B + BC)(B + C)
                       = ABC + ACC + A'BB + A'BC + BCB + BCC
                       = AC(1 + B) + A'B(1 + C) + BC
                       = AC + A'B + BC
and (A + B)(A' + C) = AA' + AC + A'B + BC = AC + A'B + BC
∴ (A + B)(A' + C)(B + C) = (A + B)(A' + C)
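The consensus theorem can also be confirmed exhaustively; the following Python sketch (ours, illustrative only) checks all eight input combinations:

from itertools import product

for A, B, C in product((0, 1), repeat=3):
    lhs = (A & B) | ((1 - A) & C) | (B & C)   # AB + A'C + BC
    rhs = (A & B) | ((1 - A) & C)             # AB + A'C
    assert lhs == rhs                          # consensus term BC is redundant
print("Consensus theorem verified exhaustively.")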
Example 2.3: Apply De Morgan’s theorems to each of the following expressions:
(a) ((A + B)' + C)'  (b) (A + B + (CD)')'  (c) ((A + B)(C + D) + E + F)'
Solution:
(a) ((A + B)' + C)' = ((A + B)')' · C'
                    = (A + B)·C' = AC' + BC'
(b) (A + B + (CD)')' = (A + B)' · ((CD)')'
                     = (A'·B') · CD = A'B'CD
(c) ((A + B)(C + D) + E + F)' = ((A + B)(C + D))' · E' · F'
                              = ((A + B)' + (C + D)') · E'F'
                              = (A'B' + C'D')·E'F'

2.3.3 Simplification of Logic Expressions using Boolean Algebra


All Boolean expressions consist of various combinations of the basic operations of
OR, AND and NOT. Any expression can be implemented using these basic gates.
It is possible, however, for a designer to implement any logic expression using only
NAND or NOR gates. The realization of AND, OR and NOT functions using
NAND gates is shown in Figure 2.14.
Proof of Logical Correctness of the Combinations
We shall now prove the correctness of the connections of the gates to produce
NOT, AND, OR and NOR outputs, with reference to Figure 2.14.
NAND as a Universal Gate
(a) NOT Gate  (b) AND Gate  (c) OR Gate  (d) NOR Gate

Fig. 2.14 NAND as a Universal Gate

1. NOT Gate Equivalent: Output of the NAND gate = (A·A)' = A'.
2. AND Gate Equivalent:
Output of NAND gate I  = (AB)'
Output of NAND gate II = ((AB)'·(AB)')' = ((AB)')' = AB
3. OR Gate Equivalent:
Output of NAND gate I   = A'
Output of NAND gate II  = B'
Output of NAND gate III = (A'·B')' = (A')' + (B')' = A + B
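These three constructions can be expressed directly in Python; the following sketch (ours, illustrative) builds NOT, AND, OR and NOR from a single NAND primitive and verifies them:

nand = lambda a, b: 1 - (a & b)

def NOT(a):     return nand(a, a)                     # (A·A)' = A'
def AND(a, b):  return nand(nand(a, b), nand(a, b))   # ((AB)'·(AB)')' = AB
def OR(a, b):   return nand(NOT(a), NOT(b))           # (A'·B')' = A + B
def NOR(a, b):  return NOT(OR(a, b))                  # one more NAND as inverter

for a in (0, 1):
    for b in (0, 1):
        assert NOT(a) == 1 - a
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
        assert NOR(a, b) == 1 - (a | b)
print("NOT, AND, OR and NOR all realized from NAND alone.")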

NOR as a Universal Gate


NOR gates can also be arranged to implement any of the Boolean expressions. This is
illustrated in Figure 2.15.

(a) NOT Gate  (b) OR Gate  (c) AND Gate  (d) NAND Gate
Fig. 2.15 NOR as a Universal Gate

Example 2.4: Complement the expression A + BC.
Solution: (A + BC)' = A'·(BC)'
                    = A'·(B' + C')
Example 2.5: Complement the expression AB + CD.
Solution: (AB + CD)' = (AB)'·(CD)' = (A' + B')(C' + D')
Example 2.6: Using Boolean algebra, simplify the following expression:
Y = ABC + A'BC + AB'C + ABC'
Realize the simplified expression for the above equation using basic logic gates.
Solution: Y = ABC + A'BC + AB'C + ABC'
            = BC(A + A') + AB'C + ABC'
            = BC·1 + AB'C + ABC'            (A + A' = 1)
            = BC + AB'C + ABC'
            = B(C + AC') + AB'C
            = B(C + A) + AB'C               (C + AC' = C + A)
            = BC + AB + AB'C
            = BC + A(B + B'C)
            = BC + A(B + C)                 (B + B'C = B + C)
            = BC + AB + AC
∴ Y = AB + BC + CA
The logic circuit for the above simplified expression is given in
the following figure.


Example 2.7: Expand the term S = A ⊕ B ⊕ C using Boolean theorems. Realize
the above expression using basic logic gates.
Solution: S = A ⊕ B ⊕ C
            = (A'B + AB') ⊕ C
            = (A'B + AB')·C' + (A'B + AB')'·C
            = (A'B + AB')·C' + (A'B)'·(AB')'·C
            = (A'B + AB')·C' + (A + B')(A' + B)·C
            = (A'B + AB')·C' + (AA' + AB + A'B' + BB')·C
            = A'BC' + AB'C' + (AB + A'B')·C        (AA' = 0, BB' = 0)
∴ S = A'B'C + A'BC' + AB'C' + ABC
The logic circuit for the simplified expression is shown in the following figure.

Check Your Progress


7. What are the two De-Morgan's theorems?
8. What are the two steps to form a complement?
2.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. A logic gate is an electronic circuit, which makes logical decisions.


2. The AND gate is defined as an electronic circuit in which all the inputs must
be HIGH in order to have a HIGH output.
3. The NOT gate is also called a complementary circuit, because the circuit
accomplishes a logic negation.
4. A NAND gate is the cascade combination of AND and NOT gates.
5. The negation of the OR function is called NOT - OR or NOR. A NOR
gate is the cascade combination of NOT and OR gates.
6. The schematic symbol of the NOR gate is the OR symbol with a small
circle on the output.
7. The two theorems are as follows:
Theorem 1: (A + B + C + …)' = A'·B'·C'·…
Theorem 2: (A·B·C·…)' = A' + B' + C' + …
8. Two steps are used to form a complement.
1. The + symbols are replaced with • symbols and • symbols with +
symbols.
2. Each of the terms in the expression is complemented.

2.5 SUMMARY

• A logic gate is an electronic circuit, which makes logical decisions. To arrive
at these decisions, the most common logic gates used are OR, AND, NOT,
NAND and NOR gates. The NAND and NOR gates are called the
universal gates.
• The AND gate is defined as an electronic circuit in which all the inputs must
be HIGH in order to have a HIGH output.
• The output of an OR gate assumes state 1 if one or more (or all) inputs assume
state 1.
• A NAND gate is the cascade combination of AND and NOT gates. It is
just an AND gate followed by an inverter.
• The negation of the OR function is called NOT-OR or NOR. A NOR gate
is the cascade combination of NOT and OR gates.
• An Exclusive OR function obeys the definition that the output of a two-
input XOR assumes the logic state 1 if only one input assumes the logic
state 1.
• The XNOR output is LOW when the inputs have an odd number of 1s.
• A great mathematician, De Morgan, contributed two of the most important
theorems of Boolean algebra. De Morgan’s theorems are extremely useful
in simplifying an expression in which a sum or product of variables is
complemented.
• All Boolean expressions consist of various combinations of the basic
operations of OR, AND and NOT. Any expression can be implemented
using these basic gates. It is possible, however, for a designer to implement
any logic expression using only NAND or NOR gates.

2.6 KEY WORDS

• Logic gate: An electronic circuit that makes logic decisions.
• Gates: Blocks of hardware that produce signals of binary 1 or 0 when
logic input requirements are satisfied. Each gate has a distinct graphic symbol
and its operation can be described by means of an algebraic function.
• Truth table: A compact way of representing the statements that define the
values of dependent variables.
• AND gate: An electronic circuit in which all the inputs must be HIGH in
order to have a HIGH output.
• OR gate: A digital logic gate that implements logical disjunction; its output
is HIGH when any of its inputs is HIGH.
• NOT gate: A gate that performs the mathematical operation of taking the
complement.

2.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are universal gates?
2. Name the three basic logic operations.
3. Write the logic symbol of NAND gate and its truth table.
4. Write the logic symbol of an XOR gate and its Boolean expression.
5. Write the logic diagram of an XOR gate using basic gates.
6. Write the symbol of an XNOR gate and its Boolean expression.

Long Answer Questions
1. Write the logic diagram of XNOR gate using basic gates.
2. Show how a 2-input XOR gate can be constructed from the 2-input NAND
gates.
3. Explain De-Morgan’s theorem with the help of an example.
4. Discuss the laws and rules of Boolean algebra.

2.8 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.

UNIT 3 SIMPLIFICATION OF EXPRESSIONS
Structure
3.0 Introduction
3.1 Objectives
3.2 SOP and POS Expressions
3.2.1 Minterm
3.2.2 Maxterm
3.2.3 Deriving Sum of Product (SOP) Expression
3.2.4 Deriving Product of Sum (POS) Expression from a Truth Table
3.3 Karnaugh Map (K-map)
3.3.1 K-Map Simplification for Two Variables Using SOP Form
3.3.2 K-Map with Three Variables Using SOP Form
3.3.3 K-Map Simplification for Four Variables Using SOP Form
3.3.4 Five-Variable K-Map
3.3.5 K-Map Using POS Form
3.4 Quine–McCluskey Method
3.5 Two Level Implementation of Combinational Circuits
3.5.1 Types of Combinational Circuits
3.5.2 Implementation of Combinational Circuits
3.6 Answers to Check Your Progress Questions
3.7 Summary
3.8 Key Words
3.9 Self Assessment Questions and Exercises
3.10 Further Readings

3.0 INTRODUCTION

In this unit, you will learn about the SOP and POS expressions and Karnaugh map
minimizations. DeMorgan’s theorem states that the inversion bar of an expression
may be broken at any point and the operation at that point replaced by its opposite,
i.e., AND replaced by OR and vice versa. Karnaugh maps provide a pictorial
method of grouping together expressions with common factors and therefore
eliminating unwanted variables. The Karnaugh map can also be described as a
special arrangement of a truth table. The input-output relationship of the binary
variables for each gate can be represented in tabular form in a truth table. A truth
table is a compact way of representing the statements that define the values of
dependent variables. However, it is often far more convenient to use mathematical
descriptions for binary variables.

3.1 OBJECTIVES

After going through this unit, you will be able to:
• Explain how SOP and POS expressions work
• Analyse the importance of a Karnaugh map
• Understand the Quine–McCluskey method
• Discuss the implementation of combinational circuits

3.2 SOP AND POS EXPRESSIONS

Logical functions are generally expressed in terms of logical variables. Values taken
on by the logical functions and logical variables are in the binary form. An arbitrary
logic function can be expressed in the following forms:
1. Sum of Products (SOP)
2. Product of Sums (POS)
Product term: The AND function is referred to as a product. In Boolean algebra,
the word ‘product’ loses its original meaning but serves to indicate an AND function.
The logical product of several variables on which a function depends is considered
to be a product term. The variables in a product term can appear either in
complemented or uncomplemented form. AB'C, for example, is a product term.
Sum term: An OR function (+ sign) is generally used to refer to a sum. The logical
sum of several variables on which a function depends is considered to be a sum
term. Variables in a sum term can appear either in complemented or
uncomplemented form. A + B' + C, for example, is a sum term.
Sum Of Products (SOP): The logical sum of two or more logical product terms
is called a Sum of Products expression. It is basically an OR operation of AND
operated variables, such as:
1. Y = AB + BC + AC
2. Y = AB + A'C + BC
Product Of Sums (POS): A product of sums expression is a logical product of
two or more logical sum terms. It is basically an AND operation of OR operated
variables, such as:
1. Y = (A + B')(B + C')(C + A')
2. Y = (A + B + C')(A' + C)

3.2.1 Minterm
A product term containing all the K variables of the function in either complemented
or uncomplemented form is called a minterm. A 2-variable function has four
possible combinations, viz. A'B', A'B, AB' and AB. These product terms are called
minterms or standard products or fundamental products. For a 3-binary input
variable function, there are 8 minterms, as shown in Table 3.1. Each minterm can
be obtained by the AND operation of all the variables of the function. In the
minterm, a variable appears either in uncomplemented form, if it possesses the value
1 in the corresponding combination, or in complemented form, if it has the
value 0. The minterms of a 3-variable function can be represented by
m0, m1, m2, m3, m4, m5, m6 and m7; the suffix indicates the decimal code
corresponding to the minterm combination.
Table 3.1 The Minterm Table

A  B  C    Minterm
0  0  0    A'B'C'
0  0  1    A'B'C
0  1  0    A'BC'
0  1  1    A'BC
1  0  0    AB'C'
1  0  1    AB'C
1  1  0    ABC'
1  1  1    ABC
The main property of a minterm is that it possesses the value 1 for only one
combination of the K input variables; i.e., for a K variable function, of the 2^K minterms,
only one minterm will have the value 1, while the remaining 2^K − 1 minterms will
possess the value 0 for an arbitrary input combination. For example, as shown in
Table 3.1, for the input combination 010, i.e., for A = 0, B = 1 and C = 0, only the
minterm A'BC' will have the value 1, while the remaining seven minterms will have
the value 0.
Canonical Sum of Product Expression: It is defined as the logical sum of all the
minterms derived from the rows of a truth table, for which the value of the function
is 1. It is also called a minterm canonical form. The canonical sum of products
expression can be given in a compact form by listing the decimal codes in
correspondence with the minterms for which the function has the value 1. For example, if
the canonical sum of product form of a 3-variable logic function Y has the three minterms
A'B'C', AB'C and ABC', this can be expressed as the sum of the decimal codes
corresponding to these minterms as stated below:
Y = Σ m(0, 5, 6)
  = m0 + m5 + m6
  = A'B'C' + AB'C + ABC'
where Σ m(0, 5, 6) represents the summation of the minterms corresponding to
the decimal codes 0, 5 and 6.
Using the following procedure, the canonical sum of product form of a logic
function can be obtained:
1. Examine each term in the given logic function. Retain it if it is a minterm;
continue to examine the next term in the same manner.
2. Check for variables that are missing in each product which is not a
minterm. Multiply the product by (X + X'), for each variable X that is
missing.
3. Multiply all the products and omit the redundant terms.
The above procedures can be explained with the following examples.
Example 3.1: Obtain the canonical sum of product form of the function
Y(A, B) = A + B
Solution: The given function contains the two variables A and B. The variable
B is missing in the first term and the variable A is missing in the second. Therefore, the
first term has to be multiplied by (B + B') and the second term by (A + A'), as given
below:
A + B = A·1 + B·1
      = A·(B + B') + B·(A + A')
      = AB + AB' + AB + A'B
      = AB + AB' + A'B            (AB + AB = AB)
∴ Y(A, B) = A + B = AB + AB' + A'B
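The expansion can also be obtained by brute force, evaluating the function on every input row; this Python sketch is ours (illustrative names only):

from itertools import product

def canonical_sop(f, names=("A", "B")):
    """List the minterms (and their decimal codes) for which f evaluates to 1."""
    terms, codes = [], []
    for bits in product((0, 1), repeat=len(names)):
        if f(*bits):
            codes.append(int("".join(map(str, bits)), 2))
            terms.append("".join(n if b else n + "'" for n, b in zip(names, bits)))
    return codes, " + ".join(terms)

codes, expr = canonical_sop(lambda a, b: a | b)
print(codes, expr)   # [1, 2, 3] A'B + AB' + AB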

3.2.2 Maxterm
A sum term containing all the K variables of the function in either complemented or
uncomplemented form is called a maxterm. A 2-variable function has four possible
combinations, viz. (A + B), (A + B'), (A' + B) and (A' + B'). These sum terms are called
maxterms. So also, a 3-binary input variable function has 8 maxterms, as shown in
Table 3.2. Each maxterm can be obtained by the OR operation of all the variables
of the function. In a maxterm, a variable appears either in uncomplemented form if
it possesses the value 0 in the corresponding combination, or in complemented
form if it has the value 1. The maxterms of a 3-variable function can be
represented by M0, M1, M2, M3, M4, M5, M6 and M7; the suffix indicates the
decimal code corresponding to the maxterm combination.
Table 3.2 The Maxterm Table

A  B  C    Maxterm
0  0  0    A + B + C
0  0  1    A + B + C'
0  1  0    A + B' + C
0  1  1    A + B' + C'
1  0  0    A' + B + C
1  0  1    A' + B + C'
1  1  0    A' + B' + C
1  1  1    A' + B' + C'

The most important property of a maxterm is that it possesses the value 0
for only one combination of the K input variables; i.e., for a K variable function, of the
2^K maxterms, only one maxterm will have the value 0, while all the remaining
2^K − 1 maxterms will have the value 1 for an arbitrary input combination. For example,
for the input combination 101, i.e., for A = 1, B = 0 and C = 1, only the maxterm
(A' + B + C') will have the value 0, while the remaining seven maxterms will have the
value 1. This can be studied in Table 3.2.
From Tables 3.1 and 3.2, it is found that each maxterm is the complement
of the corresponding minterm. For example, if the maxterm is (A' + B + C'), then its
complement, i.e., AB'C, is the corresponding minterm.
Canonical product of sum expression: This is defined as the logical product of
all the maxterms derived from the rows of the truth table, for which the value of the
function is 0. It is also known as the maxterm canonical form. The canonical product
of sums expression can be given in a compact form by listing the decimal codes
corresponding to the maxterms for which the function has the value 0. For example, if
the canonical product of sum form of a 3-variable logic function Y has the four maxterms
(A + B + C), (A + B' + C), (A' + B + C) and (A' + B' + C'), then it can be expressed as
the product of the decimal codes as given below:
Y = Π M(0, 2, 4, 7)
  = M0 · M2 · M4 · M7
  = (A + B + C)(A + B' + C)(A' + B + C)(A' + B' + C')
The following procedure can be used to obtain the canonical product of the
sum form of a logic function:
1. Examine each term in the given logic function. Retain it if it is a maxterm;
continue to examine the next term in the same manner.
2. Check for variables that are missing in each sum which is not a maxterm.
Add XX' to the sum term, for each variable X that is missing.
3. Expand the expression using the distributive property and eliminate the
redundant terms.
The above procedures can be explained with the following examples.
Example 3.2: Express the function Y = A + B'C in (a) canonical SOP and
(b) canonical POS form.
Solution: (a) Canonical sum of products form
Y = A + B'C
  = A(B + B')(C + C') + B'C(A + A')
  = (AB + AB')(C + C') + AB'C + A'B'C
  = ABC + ABC' + AB'C + AB'C' + AB'C + A'B'C
  = ABC + ABC' + AB'C + AB'C' + A'B'C        (AB'C + AB'C = AB'C)
Y = m7 + m6 + m5 + m4 + m1
Therefore, Y = Σ m(1, 4, 5, 6, 7)
(b) Canonical product of sums form
Y = A + B'C
  = (A + B')(A + C)                          [A + BC = (A + B)(A + C)]
  = (A + B' + CC')(A + C + BB')
  = (A + B' + C)(A + B' + C')(A + B + C)(A + B' + C)
  = (A + B + C)(A + B' + C)(A + B' + C')     [(A + B' + C) repeated]
Y = M0 · M2 · M3
Therefore, Y = Π M(0, 2, 3)

3.2.3 Deriving Sum of Product (SOP) Expression


The Sum of Product (SOP) expression for a Boolean function can be derived
from its truth table by summing (OR operation) the product terms that correspond
to the combinations containing a function value 1. In the product term, the input
variable appears either in uncomplemented form if it possesses the value 1, or in
complemented form if it contains the value 0.
Now, consider the truth table, shown in Table 3.3, for a 3-input function Y.
Here, the Y value is 1 for the input combinations 010, 011, 101 and 111, and their
corresponding product terms are A'BC', A'BC, AB'C and ABC respectively.
Table 3.3 Truth Table

Inputs       Output    Product Terms    Sum Terms
A  B  C      Y
0  0  0      0                          (A + B + C)
0  0  1      0                          (A + B + C')
0  1  0      1         A'BC'
0  1  1      1         A'BC
1  0  0      0                          (A' + B + C)
1  0  1      1         AB'C
1  1  0      0                          (A' + B' + C)
1  1  1      1         ABC

Now, the final SOP expression for the output Y is obtained by summing
(OR operation of) the four product terms as follows:
Y = A'BC' + A'BC + AB'C + ABC
The procedure for obtaining the output expression in SOP form from a truth
table can be summarised, in general, as follows:
1. Give a product term for each input combination in the table, containing
an output value of 1.
2. Each product term contains its input variables in either complemented
or uncomplemented form. If an input variable is 0, it appears in
complemented form; if the input variable is 1, it appears in
uncomplemented form.
3. All the product terms are OR operated together in order to produce the
final SOP expression of the output.
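The three-step procedure can be sketched directly in Python; the names below are ours and the lambda encodes the function of Table 3.3 as an assumption for the demonstration:

from itertools import product

def sop_from_truth_table(f, names=("A", "B", "C")):
    """Steps 1-3: one product term per row with output 1, then OR them."""
    terms = []
    for bits in product((0, 1), repeat=len(names)):
        if f(*bits):   # step 1: a row whose output value is 1
            # step 2: variable as-is if its value is 1, complemented (') if 0
            terms.append("".join(n if b else n + "'" for n, b in zip(names, bits)))
    return " + ".join(terms)   # step 3: OR all the product terms

# The function of Table 3.3: Y = 1 for inputs 010, 011, 101, 111
Y = lambda a, b, c: (not a and b) or (a and c)
print(sop_from_truth_table(Y))   # A'BC' + A'BC + AB'C + ABC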
3.2.4 Deriving Product of Sum (POS) Expression from a Truth Table
The Product of Sum (POS) expression for a Boolean (switching) function can NOTES
also be obtained from a truth table by the AND operation of the sum terms
corresponding to the combinations for which the function assumes the value 0. In
the sum term, the input variable appears in an uncomplemented form if it has the
value 0 in the corresponding combination and in the complemented form if it has
the value 1.
Studying the truth table shown in Table 3.3, for a 3-input function Y, we find
that the Y value is 0 for the input combinations 000, 001, 100 and 110, and that
their corresponding sum terms are (A + B + C), (A + B + C'), (A' + B + C) and
(A' + B' + C) respectively.
Now the final POS expression for the output Y is obtained by the AND
operation of the four sum terms as follows:
Y = (A + B + C)(A + B + C')(A' + B + C)(A' + B' + C)

The procedure for obtaining the output expression in POS form from a
truth table can be summarised, in general, as follows:
1. Give a sum term for each input combination in the table, which has an
output value of 0.
2. Each sum term contains all its input variables in complemented or
uncomplemented form. If the input variable is 0, then it appears in an
uncomplemented form; if the input variable is 1, it appears in the
complemented form.
3. All the sum terms are AND operated together to obtain the final POS
expression of the output.
The POS expression for a Boolean (switching) function can also be obtained
from its SOP expression using double complementation, Y = (Y')', as given in the
following example. Consider a function,
Y = A'BC' + A'BC + AB'C + ABC
The complement Y' can be obtained by the OR operation of the minterms
which are not present in Y. Therefore,
Y' = A'B'C' + A'B'C + AB'C' + ABC'
Y = (Y')' = (A'B'C' + A'B'C + AB'C' + ABC')'
  = (A'B'C')'·(A'B'C)'·(AB'C')'·(ABC')'
  = (A + B + C)(A + B + C')(A' + B + C)(A' + B' + C)
Check Your Progress
1. Define SOP and POS.
2. Define minterm.
3. What is maxterm?

3.3 KARNAUGH MAP (K-MAP)

Maurice Karnaugh, a telecommunications engineer, invented a graphical way of
visualizing and then simplifying Boolean expressions. This graphical representation is
now known as a Karnaugh map or K-map. It can also be used to solve problems with
‘don’t care’ bits: in SOP form, a don’t-care cell is treated as 1 when that helps form a
larger group, and as 0 otherwise.
Karnaugh maps provide a systematic method to obtain simplified SOP
Boolean expressions. This is a compact way of representing a truth table and is a
technique that is used to simplify logic expressions. It is ideally suited for four or
fewer variables. It becomes cumbersome for five or more variables. Each square
represents either a minterm or a maxterm. A K-map of n variables will have 2^n
squares. For a Boolean expression, product terms are denoted by 1’s, whereas
sum terms are denoted by 0’s—but 0’s are often left blank.
K-Map Description and Terminology
A K-map is a matrix consisting of rows and columns that represent the output
values of a Boolean function. K-map can be of two forms: SOP form and POS
form.
K-Map for SOP Form
The output values placed in each cell of the matrix are derived from the ‘minterms’
of a Boolean function. A minterm is a product term that contains all of the
function’s variables exactly once, either complemented or not complemented.
Minterm Example With Two Variables: The minterms for a function
having the inputs x and y are as follows (Refer Table 3.4):
X'Y', X'Y, XY' and XY

Table 3.4 Truth Table for Two Variables Minterm

Minterm    X    Y
X'Y'       0    0
X'Y        0    1
XY'        1    0
XY         1    1

If a variable’s input value is 1, then the variable is written as it is; otherwise, the
complement of that variable is written.
Minterm Example With Three Variables: Similarly, a function having
three inputs has the minterms that are shown in Table 3.5.
Table 3.5 Truth Table for Three Variables Minterm

Minterm    X    Y    Z
X'Y'Z'     0    0    0
X'Y'Z      0    0    1
X'YZ'      0    1    0
X'YZ       0    1    1
XY'Z'      1    0    0
XY'Z       1    0    1
XYZ'       1    1    0
XYZ        1    1    1

K-Map Cell Using SOP Form

A K-map consists of a grid of squares, each square representing one canonical
minterm combination of the variables or their inverse. The map is arranged so that
squares representing minterms which differ by only one variable are adjacent both
vertically and horizontally. Therefore, X'Y'Z' would be adjacent to X'Y'Z and
would also be adjacent to X'YZ' and XY'Z'. For example, a K-map has a cell for
each minterm, as shown in Table 3.6 and Figure 3.1.
Table 3.6 Truth Table for the Given Expression

F(X, Y) = XY
X Y XY
0 0 0
0 1 0
1 0 0
1 1 1

This means that it has a cell for each line of the truth table of a function.

Fig. 3.1 K-Map for Function F

For example, the truth table for the function F(x, y) = x + y is given in Table 3.7.
Table 3.7 Truth Table for the Given Expression
F(X, Y) = X + Y
X    Y    X + Y
0    0    0
0    1    1
1    0    1
1    1    1

This function is equivalent to the OR of all of the minterms that have a value
of 1 (SOP form). Thus,
F(X, Y) = X + Y = X'Y + XY' + XY    … (1)
Table 3.7 is mapped into K-map in Figure 3.2.

Fig. 3.2 K-Map for Function F

Minimization Technique

Based on the Unifying Theorem, X X 1 , the expression to be minimized should


generally be in SOP form (if necessary, the conversion process is applied to create
the SOP form). The function is mapped onto the K-map by marking 1 in those
squares corresponding to the terms in the expression to be simplified (the other
squares may be filled with 0’s).
Pairs of 1’s on the map which are adjacent are combined using the theorem
Y(X + X′) = Y, where Y is any Boolean expression. (If two pairs are also adjacent,
then these can also be combined using the same theorem.)
The minimization procedure consists of recognizing multiple pairs as follows:
 These are circled indicating reduced terms.
 Groups which can be circled are those which have two (2¹) 1's, four (2²) 1's, eight (2³) 1's and so on.
 Since the squares on one edge of the map are considered adjacent to
those on the opposite edge, group can be formed with these squares.
 Groups are allowed to overlap.
The objective is to cover all the 1’s on the map in the fewest number of
groups and to create the largest groups to do this. Once all possible groups have
been formed, the corresponding terms are identified as follows:
 A group of two 1's eliminates one variable from the original minterm.
 A group of four 1’s eliminates two variables from the original minterm.
 A group of eight 1's eliminates three variables from the original minterm, and so on.
 The variables eliminated are those which are different in the original
minterms of the group.
Figure 3.3 represents a couple of examples of invalid groups.

Fig. 3.3 Examples of Invalid Groups

3.3.1 K-Map Simplification for Two Variables Using SOP Form


We can reduce complicated expressions to their simplest terms by finding adjacent 1's in the K-map, as shown in Figure 3.4, that can be collected into groups that are powers of two.

Fig. 3.4 K-Map for Two Variables

Here, there are two such groups:


1. In the vertical group, it does not matter what value x has, hence the
group is only dependent on variable y.
2. Similarly, in the horizontal group it does not matter what value y has.
The group is only dependent on variable x. Refer Figure 3.5.

Fig. 3.5 K-Map Showing Groups for Figure 3.4


Hence, the Boolean function reduces to x + y.
Rules for K-Map Simplification Using SOP Form
The rules of K-map simplification are as follows:
1. Groupings can contain only 1’s; no 0’s.
NOTES 2. Groups can be formed only at right angles; diagonal groups are not allowed.
3. The number of 1s in a group must be a power of 2—even if it contains a
single 1.
4. The groups must be made as large as possible.
5. Groups can overlap and wrap around the sides of the K-map.
Example 3.3
Reduce the given function using K-map: f = X′Y + XY
Solution:
In this example, we have the equation as input, and we have one output function.
Draw the K-map for function f, marking 1 in the X′Y and XY positions. Now combine the two 1's as shown in the figure below to form a single term. As we can see, X′ and X cancel and only Y remains.

SOP form f = Y.
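The reduction can be verified by brute force. The following Python check is added for illustration (not from the original text) and confirms that X′Y + XY equals Y for every input combination:

for X in (0, 1):
    for Y in (0, 1):
        f = ((1 - X) & Y) | (X & Y)   # X'Y + XY
        assert f == Y                  # the X and X' terms cancel, leaving Y
print("f = X'Y + XY = Y for all inputs")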

Example 3.4: Reduce the given function using K-map: f = X′Y + XY′ + XY.

Solution:
In this example, we have the equation as input, and we have one output function.
Draw the K-map for function f, marking 1 in the X′Y, XY′ and XY positions. Now combine the 1's in pairs as shown in the figure below to form the two single-variable terms.

SOP form f = X + Y
3.3.2 K-Map with Three Variables Using SOP Form
K-map for three variables is constructed as shown in Figure 3.6.


Fig. 3.6 K-Map for Three Variables

We have placed each minterm in the cell that will hold its value. Please note
that the values for the yz combination at the top of the matrix form a pattern that is
not a normal binary sequence. A K-map must be ordered so that each minterm differs in only one variable from each neighbouring cell; hence 11 appears before 10. This ordering helps in simplification.
The first row of the K-map contains all minterms where x has a value of
zero. The first column contains all minterms where y and z both have a value of
zero. Consider the function:
F(X, Y, Z) = X′Y′Z + X′YZ + XY′Z + XYZ

Its K-map is given in Figure 3.7.

Fig. 3.7 K-Map for the Given Function

This grouping tells us that the changes in the variables x and y have no
influence upon the value of the function. They are irrelevant. Refer Figure 3.8.

Fig. 3.8 K-Map Showing Groups for Figure 3.7

This means that the function reduces to F(X,Y,Z) = Z.


Example 3.5: Reduce the function using K-map:
F(X, Y, Z) = X′Y′Z′ + X′Y′Z + X′YZ + X′YZ′ + XY′Z′ + XYZ′
Solution:
Its K-map is shown below in the figure. There are (only) two groupings of 1s.


In this K-map, we see an example of a group that wraps around the sides
of a K-map. This group tells us that the values of x and y are not relevant to the
term of the function that is encompassed by the group.

The group in the top row tells us that only the value of x is significant in that
group.

We see that the input value of x is 0 in that row, i.e., the minterm is complemented, so the other term of the reduced function is X′. The reduced function is as follows:
F(X, Y, Z) = X′ + Z′
Another Form of 3-Variable K-Map
There are eight minterms for three variables (X, Y and Z). Therefore, there are eight cells in a 3-variable K-map. One important thing to note is that K-maps follow the Gray code sequence and not the binary one.
Using Gray code arrangement ensures that minterms of adjacent cells differ
by only ONE literal. (Other arrangements which satisfy this criterion may also be
used.) Refer Figure 3.9.

Fig. 3.9 Three-Variable K-Map Using Minterm

Each cell in a 3-variable K-map has three adjacent neighbours. In general,


each cell in an n-variable K-map has n adjacent neighbours. Refer Figure 3.10.


Fig. 3.10 Presentation for Three-Variable in K-Map

There is wrap-around in the K-map. Refer Figure 3.11.


1. X′Y′Z′ (m0) is adjacent to X′YZ′ (m2)
2. XY′Z′ (m4) is adjacent to XYZ′ (m6)

Fig. 3.11 Presentation of a Wrap-Around for Three Variables in a K-Map

3.3.3 K-Map Simplification for Four Variables Using SOP Form


The model can be extended to accommodate the 16 minterms that are produced
by a 4-input function. This is the format for a 16-minterm K-map. Refer Figure
3.12.

Fig. 3.12 K-Map for Four Variables

Example 3.6: Simplify the given expression:


F(W, X, Y, Z) = W′X′Y′Z′ + W′X′YZ′ + W′X′YZ + W′XYZ′ + WX′Y′Z′ + WX′YZ′ + WX′YZ + WXYZ′
Solution:
We have populated the K-map shown below with the non-zero minterms from the
function:



The three groups consist of the following:


1. A pair entirely within the K-map at the right.
2. A quad group that wraps the top and bottom.
3. A quad that spans the corners.
Thus, we have three terms in our final function:
F(W, X, Y, Z) = X′Y + X′Z′ + XYZ′
Example 3.7: Simplify the given expression: f(W, X, Y, Z) = Σm(4, 5, 10, 11, 14, 15)
Solution:
f = W′XY′ + WY

Choosing K-Map Groups


It is possible to have a choice as to how to pick groups within a K-map, while
keeping the groups as large as possible. The (different) functions that result from
the groupings as shown in Figure 3.13 (a) and (b) are logically equivalent.


Fig. 3.13 Different Groupings for Same K-Map

Don’t Care Conditions


Real circuits do not always need to have an output defined for every possible
input. For example, some calculator displays consist of 7-segment LEDs. These
LEDs can display 2⁷ − 1 patterns, but only 10 of them are useful. Refer Figure 3.14.

Fig. 3.14 7-Segment Display LED

If a circuit is designed so that a particular set of inputs can never happen, we


call this set of inputs a don’t care condition. They are helpful to us in K-map
circuit simplification.
Sometimes, we come across cases in which certain input combinations
never occur. For example, in BCD system 1010 to 1111 are never used and are
known as don’t care terms. We don’t care what the function output is to be because
they are guaranteed never to occur. These don’t care conditions can be used on a
map to provide further simplification of the function. To distinguish the don’t care
from 1’s and 0’s, an X will be used to represent.
For example, in a K-map, a don’t care condition is identified by an X in the
cell of the minterm(s) for the don’t care inputs as shown in Figure 3.15.



Fig. 3.15 Four-Variable K-Map Where X Represents Don't Care Conditions

In performing the simplification, we are free to include or ignore the X’s


when creating our groups. In one grouping in the K-map shown in Figure 3.16,
we have the function:

Fig. 3.16 Grouping for Figure 3.15

F(W, X, Y, Z) = W′X′ + YZ
A different grouping as shown in Figure 3.17 gives us the function:

Fig. 3.17 A Different Grouping for Figure 3.16

F(W, X, Y, Z) = W′Z + YZ

The truth table of F(W, X, Y, Z) = W′Z + YZ is different from the truth table of F(W, X, Y, Z) = W′X′ + YZ. However, the values for which they differ are the inputs for which we have don't care conditions.
3.3.4 Five-Variable K-Map
There are 32 cells in a five variables (V, W, X, Y, Z) K-map as shown in Figure
3.18.


Fig. 3.18 Five-Variable K-Map

Recapping Rules of K-Map Simplification Using SOP Form:


1. Groupings can contain only 1s; no 0s.
2. Groups can be formed only at right angles; diagonal groups are not allowed.
3. The number of 1s in a group must be a power of 2—even if it contains a
single 1.
4. The groups must be made as large as possible.
5. Groups can overlap and wrap around the sides of the K-map.
6. Use don't care conditions whenever they allow a larger group to be made.
3.3.5 K-Map Using POS Form
The output values placed in each cell are derived from the ‘maxterm’ of a Boolean
function. A maxterm is a sum term that contains all of the function’s variables
exactly once, either complemented or not complemented.
Inverse Function
Following points need to be considered:
1. The 0’s on a K-map indicate when the function is 0.
2. We can minimize the inverse function by grouping the 0’s (and any suitable
don’t cares) instead of the 1’s.

3. This technique leads to an expression which is not logically equivalent to that obtained by grouping the 1's (i.e., it gives the inverse of the function).
4. Minimizing for the inverse function may be particularly advantageous if there
are many more 0’s than 1’s on the map.
5. We can also apply De Morgan’s theorem to obtain a POS expression. For
example, consider Table 3.8.
Table 3.8 Two-Variable Maxterm Representation

X Y Maxterm
0 0 X + Y
0 1 X + Y′
1 0 X′ + Y
1 1 X′ + Y′

If a variable's input is 0, it is written as it is; otherwise, the complement of that variable is written.
Recapping Rules of K-Map Simplification Using POS Form
1. Groupings can contain only 0s; no 1s.
2. Groups can be formed only at right angles; diagonal groups are not allowed.
3. The number of 0s in a group must be a power of 2—even if it contains a
single 0.
4. The groups must be made as large as possible.
5. Groups can overlap and wrap around the sides of the K-map.
6. Use don’t care conditions whenever you can make a larger group.
Example 3.8: Simplify the given function f(A, B) = Σm(0, 2, 3) using K-map.
Solution:

f = (A + B′) in POS form

Example 3.9: Simplify the given function f(A, B) = Σm(1, 2, 3) using K-map.
Solution:


f = (A + B)

3.4 QUINE–McCLUSKEY METHOD

Consider the following points:


1. The expression is represented in the canonical SOP form if not already in
that form.
2. The function is converted into numeric notation.
3. The numbers are converted into binary form.
4. The minterms are arranged in a column divided into groups. Begin with the
minimization procedure.
 Each minterm of one group is compared with each minterm in the group
immediately below.
 Each time a number is found in one group which is the same as a number
in the group below except for one digit, the numbers pair is ticked and a
new composite is created.
This composite number has the same number of digits as the numbers in the pair, except that the differing digit is replaced by an 'x'.
5. The above procedure is repeated on the second column to generate a third
column.
6. The next step is to identify the essential prime implicants, which can be done using a prime implicant chart.
 Where a prime implicant covers a minterm, the intersection of the
corresponding row and column is marked with a cross.
 Those columns with only one cross identify the essential prime implicants.
 These prime implicants must be in the final answer.
 The single crosses on a column are circled and all the crosses on the
same row are also circled, indicating that these crosses are covered by the prime implicants selected.
 Once one cross on a column is circled, all the crosses on that column
can be circled since the minterm is now covered.
 If any non-essential prime implicant has all its crosses circled, the prime implicant is redundant and need not be considered further.
 Next, a selection must be made from the remaining non-essential prime implicants, by considering how the non-circled crosses can best be covered.
 One generally would take those prime implicants which cover the greatest number of crosses in their row.
 If all the crosses in one row also occur on another row which includes
further crosses, then the latter is said to dominate the former and can be
selected.
 The dominated prime implicant can then be deleted.
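The heart of the method is step 4, the pairwise combining of minterms that differ in exactly one position. A minimal sketch of one such pass is given below in Python, added for illustration only (terms are held as strings over '0', '1' and 'x'; a complete implementation would also tick off the terms that were combined):

def combine_pass(terms):
    # Merge every pair of terms differing in exactly one position,
    # replacing the differing digit with 'x'.
    merged = set()
    for a in terms:
        for b in terms:
            diff = [i for i in range(len(a)) if a[i] != b[i]]
            if len(diff) == 1:
                i = diff[0]
                merged.add(a[:i] + 'x' + a[i + 1:])
    return sorted(merged)

# Minterms 1, 3, 9 and 11 written as 4-bit strings (U V W X):
print(combine_pass(['0001', '0011', '1001', '1011']))
# ['00x1', '10x1', 'x001', 'x011']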
Example 3.10: Find the minimal SOP for the Boolean expression f = Σm(1, 2, 3, 7, 8, 9, 10, 11, 14, 15), using the Quine–McCluskey method.
Solution: Firstly, these minterms are represented in the binary form as shown in
Table 3.9. The above binary representations are grouped into a number of sections
in terms of the number of 1’s as shown in Table 3.10.
Table 3.9 Binary Representation of Minterms

Minterms U V W X
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
14 1 1 1 0
15 1 1 1 1

Table 3.10 Group of Minterms for Different Number of 1’s

No of 1’s Minterms U V W X
1 1 0 0 0 1
1 2 0 0 1 0
1 8 1 0 0 0
2 3 0 0 1 1
2 9 1 0 0 1
2 10 1 0 1 0
3 7 0 1 1 1
3 11 1 0 1 1
3 14 1 1 1 0
4 15 1 1 1 1

Any two numbers in these groups which differ from each other in only one variable can be chosen and combined to get the 2-cell combinations, as shown in Table 3.11.
Table 3.11 Two-Cell Combinations
Combination U V W X
(1,3) 0 0 - 1
(1,9) - 0 0 1
(2,3) 0 0 1 -
(2,10) - 0 1 0
(8,9) 1 0 0 -
(8,10) 1 0 - 0
(3,7) 0 - 1 1
(3,11) - 0 1 1
(9,11) 1 0 - 1
(10,11) 1 0 1 -
(10,14) 1 - 1 0
(7,15) - 1 1 1
(11,15) 1 - 1 1
(14,15) 1 1 1 -

From the 2-cell combinations, terms that have their dashes in the same position and differ in only one variable can be combined to form 4-cell combinations, as shown in Table 3.12.
Table 3.12 Four-Cell Combinations

Combination U V W X
(1,3,9,11) - 0 - 1
(2,3,10,11) - 0 1 -
(8,9,10,11) 1 0 - -
(3,7,11,15) - - 1 1
(10,11,14,15) 1 - 1 -

The cells (1, 3) and (9, 11) form the same 4-cell combination as the cells (1,
9) and (3, 11). The order in which the cells are placed in a combination does not
have any effect. Thus, the (1, 3, 9, 11) combination could be written as (1, 9, 3,
11).
From the above 4-cell combination table, the prime implicant chart can be plotted as shown in Table 3.13.

Table 3.13 Prime Implicants Table

Prime Implicants 1 2 3 7 8 9 10 11 14 15
(1,3,9,11) X - X - - X - X - -
(2,3,10,11) - X X - - - X X - -
(8,9,10,11) - - - - X X X X - -
(3,7,11,15) - - X X - - - X - X
(10,11,14,15) - - - - - - X X X X
Result X X - X X - - - X -

The columns having only one cross mark correspond to the essential prime implicants. Here every prime implicant turns out to be essential, and their sum gives the function in its minimal SOP form: f = V′X + V′W + UV′ + WX + UW.
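Since every prime implicant is essential here, the cover can be double-checked by enumeration. The sketch below is an added check, not part of the original example; it confirms that the five terms reproduce exactly the given minterms:

covered = set()
for n in range(16):
    U, V, W, X = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    # f = V'X + V'W + UV' + WX + UW
    f = ((1 - V) & X) | ((1 - V) & W) | (U & (1 - V)) | (W & X) | (U & W)
    if f:
        covered.add(n)
print(sorted(covered))   # [1, 2, 3, 7, 8, 9, 10, 11, 14, 15]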

3.5 TWO LEVEL IMPLEMENTATION OF


COMBINATIONAL CIRCUITS

The following points about a combinational circuit need consideration:
1. It always gives the same output for a given set of inputs.
2. It does not store any information (it is memoryless).
3. Examples include the adder, decoder, multiplexer (MUX), shifter, etc., which combine to form larger units such as the ALU.
Specifications of Combinational Logic Circuit
Boolean Algebra: This forms the algebraic expression showing the operation of the logic circuit in terms of its input variables, each either True or False, and states when the output is at logic 1. For example, consider the Boolean expression:
Q = (A · B′)(A + B′) · C

Truth Table: A truth table defines the function of a logic gate by providing a
concise list that shows all the output states in tabular form for each possible
combination of input variables that the gate can encounter (Refer Table 3.14).
Table 3.14 Typical Truth Table

C B A Q
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 1
1 1 0 0
1 1 1 0
Logic Diagram: This is a graphical representation of a logic circuit that shows the
wiring and connections of each individual logic gate represented by a specific
graphical symbol that implements the logic circuit (Refer Figure 3.19).


Fig. 3.19 Logic Diagram for Boolean Function

Combinational logic circuits are made up from individual logic gates only.
They can also be considered as decision-making circuits. Combinational logic is
about combining logic gates together to process two or more signals in order to
produce at least one output signal according to the logical function of each logic
gate.
Common combinational circuits made up from individual logic gates that
carry out desired applications include multiplexers, demultiplexers, encoders,
decoders, full adder (FAs) and half adders (HAs), etc.

Fig. 3.20 Classification of Combinational Logic Circuit

3.5.1 Types of Combinational Circuits


The different types of combinational circuits have been depicted in the classification
chart shown in Figure 3.20. Some relevant combinational circuits are mentioned
as follows:
1. Adders: Subtraction is typically performed via 2's complement addition.
2. Multiplexers: In multiplexers, N control signals select one of 2ᴺ input lines to route to 1 output.
3. Demultiplexers: In demultiplexers, N control signals route 1 input to any of 2ᴺ output lines.
4. Decoders: In this case, the N inputs produce M outputs (typically M > N).
5. Encoders: In this case, N inputs produce M outputs (typically N > M).
Self-Instructional
Material 77
6. Converter (same as decoder or encoder): Here, N inputs produce M
outputs (typically, N = M).
7. Comparators: It compares two N-bit binary values.
8. Equal-To or Not-Equal-To: These are the easiest to design.
9. Greater-Than, Less-Than, Greater-Than-Or-Equal-To, Etc.: These require adders.
10. Parity Check/Generate Circuit: It calculates the even or odd parity over
N bits of data. This checks for good/bad parity (parity errors) in the incoming
data.
3.5.2 Implementation of Combinational Circuits
The steps involved in the designing of a combinational logic circuit are as follows:
1. Writing the statement of the problem.
2. Identification of input and output variables.
3. Expressing the relationship between the input and output variables.
4. Construction of a truth table to meet input–output requirements.
5. Writing Boolean expressions for various output variables in terms of input
variables.
6. Minimization of Boolean expressions.
7. Implementation of minimized Boolean expressions.
There are various simplification techniques available for minimizing Boolean expressions, i.e., the use of theorems and identities, Karnaugh mapping (K-map) and the Quine–McCluskey tabulation method.
Following points need to be noted:
1. The implementation should have minimum number of gates, with the
gates used having the minimum number of inputs.
2. There should be a minimum number of interconnections and the
propagation time should be the shortest.
3. The limitation on the driving capability of the gates should not be
ignored.

Check Your Progress


4. What is a K-map?
5. What are the different forms of a K-map?
6. What are don’t care conditions?

3.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The logical sum of two or more logical product terms is called a Sum of
Products expression. It is basically an OR operation of AND operated
variables. A product of sums expression is a logical product of two or more
logical sum terms. It is basically an AND operation of OR operated variables.
2. A product term containing all the K variables of the function in either
complemented or uncomplemented form is called a Minterm.
3. A sum term containing all the K variables of the function in either
complemented or uncomplemented form is called a Maxterm.
4. A K-map is a matrix consisting of rows and columns that represent the
output values of a Boolean function.
5. K-map can be of two forms: SOP form and POS form.
6. If a circuit is designed so that a particular set of inputs can never happen,
we call this set of inputs a don’t care condition.

3.7 SUMMARY

 Logical functions are generally expressed in terms of logical variables. Values


taken on by the logical functions and logical variables are in the binary form.
 The logical sum of two or more logical product terms, is called a Sum of
Products expression. It is basically an OR operation of AND operated
variables.
 A product of sums expression is a logical product of two or more logical
sum terms. It is basically an AND operation of OR operated variables.
 A product term containing all the K variables of the function in either
complemented or uncomplemented form is called a Minterm.
 Canonical Sum of Product Expression is defined as the logical sum of all the
minterms derived from the rows of a truth table, for which the value of the
function is 1. It is also called a minterm canonical form.
 A sum term containing all the K variables of the function in either
complemented or uncomplemented form is called a Maxterm.
 Canonical product of sum expression is defined as the logical product of all
the maxterms derived from the rows of truth table, for which the value of
function is 0. It is also known as the maxterm canonical form.
 Karnaugh maps provide a systematic method to obtain simplified SOPs
Boolean expressions. This is a compact way of representing a truth table
and is a technique that is used to simplify logic expressions. K-map can be
of two forms: SOP form and POS form.
 If a circuit is designed so that a particular set of inputs can never happen,
we call this set of inputs a don’t care condition. They are helpful to us in K-
map circuit simplification.
 Combinational logic circuits are made up from individual logic gates only.
They can also be considered as decision-making circuits. Combinational
logic is about combining logic gates together to process two or more signals
in order to produce at least one output signal according to the logical function
of each logic gate.

3.8 KEY WORDS

 Maxterm: A sum term containing all the K variables of the function in


either complemented or uncomplemented form is called Maxterm.
 Karnaugh map technique: This technique provides a systematic method
for simplifying and manipulating switching expressions.
 Logic diagram: It is a graphical representation of a logic circuit that shows
the wiring and connections of each individual logic gate represented by a
specific graphical symbol that implements the logic circuit.

3.9 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. What do you understand by canonical sum of product and canonical product
of sum?
2. Discuss the minimization technique in K-maps.
3. What are the rules of K-map simplification using SOP form?
4. Explain the Quine–McCluskey method for simplification of Boolean expressions.
Long Answer Questions
1. Obtain (a) minimal sum of product and (b) minimal product of sum
expressions for the function given below:
F(A, B, C, D) = m (0, 1, 2, 5, 8, 9, 10).
2. Define Don’t Care combinations. By which symbols are these combinations
represented?
3. Simplify Y= m (3,6, 7, 8, 10, 12, 14, 17, 19, 20, 21, 24, 25, 27, 28)
using the K-map method.

3.10 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.

BLOCK II
COMBINATIONAL CIRCUITS AND
SEQUENTIAL CIRCUITS

UNIT 4 COMBINATIONAL
CIRCUITS
Structure
4.0 Introduction
4.1 Objectives
4.2 Combinational Logic
4.3 Adders and subtractors
4.3.1 Full-Adder
4.3.2 Half-Subtractor
4.3.3 Full-Subtractor
4.4 Decoders
4.4.1 3-Line-to-8-Line Decoder
4.5 Encoders
4.5.1 Octal-to-Binary Encoder
4.5.2 Decimal-to-BCD Encoder
4.6 Multiplexer
4.6.1 Basic Two-Input Multiplexer
4.6.2 Four-Input Multiplexer
4.7 Demultiplexer
4.8 Answers to Check Your Progress Questions
4.9 Summary
4.10 Key Words
4.11 Self Assessment Questions and Exercises
4.12 Further Readings

4.0 INTRODUCTION

Logic circuits whose outputs at any instant of time are entirely dependent on the
input signals present at that time are known as combinational circuits. A
combinational circuit has no memory characteristic as its output does not depend
upon any past inputs. A combinational logic circuit consists of input variables, logic
gates and output variables. The design of a combinational circuit starts from the
verbal outline of the problem and ends in a logic circuit diagram or a set of Boolean
functions from which the logic diagram can be easily obtained.
Clock pulse is the vibration of a quartz crystal located inside a computer
that helps in determining the speed of the computer’s processor in MHz or GHz
by counting each pulse. The function of an encoder is to convert decimal value to

binary value. An encoder is a device that converts information from one format or

code to another. It saves memory space. A decoder is a device which does the
reverse of an encoder, undoing the encoding so that the original information can be
retrieved. Multiplexers are used to create digital semiconductors such as CPUs
and graphics controllers. You will also learn about a demultiplexer, which is the
inverse of the multiplexer, in that it takes a single data input and n address inputs; it has 2ⁿ outputs. The address input determines which data output is going to have
the same value as the data input. The other data outputs will have the value 0.

4.1 OBJECTIVES

After going through this unit, you will be able to:


 Describe the basic operation of a half-adder
 Describe the basic operation of a full-adder
 Learn about adders and subtractors
 Understand the operation of a half-subtractor and a full-subtractor
 Know the functions of decoders and encoders
 Learn about multiplexers and demultiplexers

4.2 COMBINATIONAL LOGIC

The outputs of combinational logic circuits are only determined by their current
input state as they have no feedback, and any changes to the signals being applied
to their inputs will immediately have an effect at the output. In other words, in a combinational logic circuit, as the input condition changes state, so too does the output, since combinational circuits have no memory. Combinational logic circuits are made up
from basic logic AND, OR or NOT gates that are combined or connected together
to produce more complicated switching circuits. As combination logic circuits are
made up from individual logic gates they can also be considered as decision making
circuits and combinational logic is about combining logic gates together to process
two or more signals in order to produce at least one output signal according to the
logical function of each logic gate. Common combinational circuits made up from
individual logic gates include multiplexers, decoders and demultiplexers, full and
half adders etc. One of the most common uses of combination logic is in multiplexer
and demultiplexer type circuits. Here, multiple inputs or outputs are connected to
a common signal line and logic gates are used to decode an address to select a
single data input or output switch. A multiplexer consists of two separate
components, a logic decoder and some solid state switches. Figure 4.1 shows the
hierarchy of combinational logic circuit.


Fig. 4.1 Hierarchy of Combinational Logic

A sequential circuit uses flip flops. Unlike combinational logic, sequential


circuits have state, which means basically, sequential circuits have memory. The
main difference between sequential circuits and combinational circuits is that
sequential circuits compute their output based on input and state, and that the state
is updated based on a clock. Combinational logic circuits implement Boolean
functions, so they are functions only of their inputs, and are not based on clocks.
Combinational logic is considered as the easiest circuitry to design. The outputs
from a combinational logic circuit depend only on the current inputs. The circuit
has no remembrance of what it did at any time in the past. Much of logic design
involves connecting simple, easily understood circuits to construct a larger circuit
that performs a much more complicated function.

4.3 ADDERS AND SUBTRACTORS

An electronic device (combinational circuit) which performs arithmetic addition of two bits is called a half-adder. It receives two digital inputs representing the AUGEND and the ADDEND and produces SUM and CARRY outputs.
A half-adder has two inputs and two outputs. The two inputs are the two bit
members A and B, and the two outputs are the sum (S) of A and B and the carry
bit, denoted by C. The symbol for a half-adder is shown in Figure 4.2(a). The
logic diagram of a half-adder is shown in Figure 4.2(b). Figure 4.2(c) gives the
realization of the half-adder using five NAND gates.

(a) Symbol of Half-Adder: inputs A and B; outputs S (SUM) and C (CARRY)
(b) Logic Diagram: SUM = A ⊕ B = AB′ + A′B; CARRY = AB
(c) Half-Adder using NAND Gates
Fig. 4.2 Half-Adder
A half-adder functions according to the truth table. We know that an AND
gate produces a high output only when both inputs are high, and the exclusive OR
gate produces a high output if either input, but not both is high. From the truth
table, the sum output corresponds to a logic XOR function, while the carry output
corresponds to AND function.
Let us examine each entry in Table 4.1. Half-adder does electronically what
we do mentally, when we add two bits.
Table 4.1 Truth Table for a Half-Adder

Inputs Outputs
Addend Augend Sum Carry
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
First entry : Inputs : A = 0 and B = 0
Human reponse : 0 plus 0 is 0 with a carry of 0.
Half-adder response : SUM = 0 and CARRY = 0
Second entry: Inputs : A = 1 and B = 0
Human response : 1 plus 0 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
Third entry: Inputs : A = 0 and B = 1
Human response : 0 plus 1 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
Fourth entry: Inputs : A = 1 and B = 1
Human response : 1 plus 1 is 0 with a carry of 1.
Half-adder response : SUM = 0 and CARRY = 1
The SUM output represents the least significant bit (LSB) of the sum. The Boolean expressions for the two outputs can be obtained directly from the truth table:
S(sum) = AB′ + A′B = (A + B)(A′ + B′) = A ⊕ B
C(carry) = AB
The implementation of the half-adder circuit using basic gates is shown in Figure
4.3.
S = AB′ + A′B; C = AB
Fig. 4.3 Half-Adder using Basic Gates
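The half-adder's behaviour is small enough to model directly. The following Python sketch is an added illustration (the function name is ours) and reproduces Table 4.1:

def half_adder(a, b):
    # SUM = A XOR B, CARRY = A AND B
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, '->', s, c)   # matches Table 4.1 row by row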

4.3.1 Full-Adder
A half-adder has only two inputs and there is no provision to add a carry coming
from the lower order bits when multi-bit addition is performed. For this purpose,
a third input terminal is added and this circuit is used to add A, B and Cin.
A full-adder is a combinational circuit that performs the arithmetic sum of
three input bits and produces a SUM and a CARRY.
It consists of three inputs and two outputs. Two of the input variables, denoted by A and B, represent the two significant bits to be added; the third input, Cin, represents the carry from the previous lower significant position. Two outputs are necessary because the arithmetic sum of three binary digits ranges from 0 to 3, and binary 2 or 3 needs two digits. The outputs are designated by the symbols S (for SUM) and Cout (for CARRY). The binary variable S gives the value of the LSB (least significant bit) of the SUM. The binary variable Cout gives the output CARRY.

(a) Logic Symbol of Full-Adder

(b) Full-Adder using Two Half-Adders

(c) Logic Circuit of Full-Adder

(d) Full-Adder using Two Half-Adders and an OR Gate

Fig. 4.4 Full-Adder Circuits


The symbolic diagram for a full-adder is shown in Figure 4.4(a). A full-
adder is formed by using two half-adder circuits and an OR gate as shown in
Figure 4.4(b). Note the symbol S(sigma) for the sum. The full-adder circuit which
consists of three AND gates, an OR gate and a 3-input exclusive OR gate is
shown in Figure 4.4(c).
Table 4.2 shows the truth table of a full-adder. There are several possible
cases for the three inputs and for each case the desired output values are listed.
For example, consider the case A = 1, B = 0 and Cin = 1. The full-adder must add
these bits to produce a sum (S) of 0 and carry (Cout) of 1. The reader should
check the other cases to understand them. The full-adder can do more than a
million additions per second.
Table 4.2 Truth Table for a Full-Adder
Inputs Outputs
Augend bit Addend bit Carry-in bit Sum bit Carry-out bit
A B Cin Σ Cout
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
The logic expression for the exclusive OR of the three variables A, B and Cin is:
A ⊕ B ⊕ Cin = (AB′ + A′B) ⊕ Cin
= (AB′ + A′B)Cin′ + Cin(AB′ + A′B)′
= (A + B)(A′ + B′)Cin′ + Cin(AB + A′B′)
SUM = A ⊕ B ⊕ Cin = A′B′Cin + A′BCin′ + AB′Cin′ + ABCin
For A = 1, B = 0 and Cin = 1,
Σ = 0·1·1 + 0·0·0 + 1·1·0 + 1·0·1 = 0
The sum of products for Cout is:
Cout = A′BCin + AB′Cin + ABCin′ + ABCin
= A′BCin + AB′Cin + ABCin′ + ABCin + ABCin + ABCin
= BCin(A′ + A) + ACin(B′ + B) + AB(Cin′ + Cin)
Cout = BCin + ACin + AB = AB + BCin + CinA
For A = 1, B = 0 and Cin = 1, Cout = 1·0 + 0·1 + 1·1 = 1
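The construction of Figure 4.4(b) and (d), two half-adders plus an OR gate, can be modelled the same way. This sketch is an added illustration, assuming the half_adder helper shown earlier:

def half_adder(a, b):
    return a ^ b, a & b

def full_adder(a, b, cin):
    # First half-adder adds A and B; the second adds Cin to that
    # partial sum; an OR gate merges the two carries.
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            total = a + b + cin
            assert (s, cout) == (total & 1, total >> 1)   # Table 4.2
print('full_adder matches Table 4.2')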
4.3.2 Half-Subtractor
A combinational circuit which is used to perform subtraction of two binary bits is


known as a half-subtractor.
The logic symbol of a half-subtractor is shown in Figure 4.5(a). It has two
inputs, A (minuend) and B (subtrahend) and two outputs D (difference) and C
(borrow out). It is made up of an XOR gate, a NOT gate and an AND gate.
[Figure 4.5(b)]. Subtraction of two binary numbers may be accomplished by taking
the complement of the subtrahend and adding it to the minuend; that is, the
subtraction becomes an addition operation. The truth table for half-subtraction is
given in Table 4.3. From the truth table, it is clear that the difference output is 0 if A = B and 1 if A ≠ B; the borrow output C is 1 whenever A < B. If A is less than B, then the subtraction is done by borrowing from the next higher order bit.
The Boolean expressions for difference (D) and carry (C) are given by,
D = AB AB = A  B
C = AB

(a) Logic Symbol (b) Logic Diagram


Fig. 4.5 Logic Symbol and Diagram of a Half-Subtractor

Table 4.3 shows the truth table for a half-subtractor:


Table 4.3 Truth Table for a Half-Subtractor

Inputs Outputs
Minuend Subtrahend Difference Borrow
A B D C
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
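As with the adder, the two equations D = A ⊕ B and C = A′B translate directly into a short model. The sketch below is an added illustration, not part of the original text:

def half_subtractor(a, b):
    # D = A XOR B; borrow C = A'B (a borrow occurs only when A < B)
    return a ^ b, (1 - a) & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', half_subtractor(a, b))   # matches Table 4.3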

4.3.3 Full-Subtractor
A full-subtractor is a combinational circuit that performs 3-bit subtraction.
The logic symbol of a full-subtractor is shown in Figure. 4.6(a). It has three
inputs, An (minuend), Bn (subtrahend) and Cn−1 (borrow from the previous stage), and two outputs, D (difference) and Cn (borrow). The truth table for a full-subtractor is
given in Table 4.4.

(a) Logic Symbol
(b) Logic Diagram using Two Half-Subtractors


Fig. 4.6 Formulation of a Full-Subtractor using Two Half-Subtractors
The full-subtractor can be accomplished by using two half-subtractors and
an OR gate as shown in Figure 4.6(b).
Fig. 4.7 Realization of a Full-Subtractor


Table 4.4 shows the truth table for a full-subtractor.
Table 4.4 Truth Table for a Full-Subtractor

Inputs Outputs
Minuend Subtrahend Borrow-in Difference Borrow-out
An Bn Cn−1 D Cn
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

The minterms taken from the truth table give the Boolean expression (SOP) for the difference D:
D = An′Bn′Cn−1 + An′BnCn−1′ + AnBn′Cn−1′ + AnBnCn−1
Simplifying, D = (An′Bn + AnBn′)Cn−1′ + (An′Bn′ + AnBn)Cn−1
= (An ⊕ Bn)Cn−1′ + (An ⊕ Bn)′Cn−1
or, D = An ⊕ Bn ⊕ Cn−1
Similarly, the sum of products expression for Cn can be written from the truth table as:
Cn = An′Bn′Cn−1 + An′BnCn−1′ + An′BnCn−1 + AnBnCn−1
The equation for the borrow after simplification by Karnaugh map is:
Cn = An′Bn + An′Cn−1 + BnCn−1
We notice that the equation for D is the same as the sum output of a full-adder, and the output Cn resembles the carry-out of a full-adder, except that An is complemented. From these similarities, we understand that it is possible to convert a full-adder into a full-subtractor by merely complementing An prior to its application to the inputs of the gates that form the borrow output, as shown in Figure 4.8.
An Bn
Cn−1   00  01  11  10
0       0   1   0   0
1       1   1   1   0
Fig. 4.8 Karnaugh Map for the Borrow Cn
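The two-half-subtractor construction of Figure 4.6(b) can be checked the same way. The sketch below is an added illustration, assuming the half_subtractor helper shown earlier:

def half_subtractor(a, b):
    return a ^ b, (1 - a) & b

def full_subtractor(a, b, bin_):
    # First stage subtracts Bn from An; the second subtracts the
    # incoming borrow; an OR gate merges the two borrow outputs.
    d1, c1 = half_subtractor(a, b)
    d, c2 = half_subtractor(d1, bin_)
    return d, c1 | c2

for a in (0, 1):
    for b in (0, 1):
        for bn in (0, 1):
            d, bout = full_subtractor(a, b, bn)
            assert d == (a - b - bn) & 1                  # Table 4.4
            assert bout == (1 if a - b - bn < 0 else 0)
print('full_subtractor matches Table 4.4')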

Check Your Progress


1. What is combinational logic?
2. What is a half-adder?
3. Define full-adder.

4.4 DECODERS

Many digital systems require the decoding of data. Decoding is necessary in such
applications as data multiplexing, rate multiplying, digital display, digital-to-analog
converters and memory addressing. It is accomplished by matrix systems that can
be constructed from such devices as magnetic cores, diodes, resistors, transistors
and FETs.

A decoder is a combinational logic circuit which converts binary information from n input lines to a maximum of 2ⁿ unique output lines, such that each output line is activated for only one of the possible combinations of inputs. If the n-bit decoded information has unused or don't care combinations, the decoder will have fewer than 2ⁿ outputs.
A decoder is similar to a demultiplexer, with one exception: there is no data input.
A single binary word n digits in length can represent 2ⁿ different elements of information.
An AND gate can be used as the basic decoding element because its output
is HIGH only when all of its inputs are HIGH. For example, consider the input binary number 1011. To make sure that all of the inputs to the AND gate are HIGH when the binary number 1011 occurs, the third bit (the 0) must be inverted.
If a NAND gate is used in place of the AND gate, a LOW output will
indicate the presence of the proper binary code.
4.4.1 3-Line-to-8-Line Decoder
Figure 4.9 shows the reference matrix for decoding a binary word of 3 bits. In this case, 3 inputs are decoded into eight outputs. Each output represents one of the minterms of the 3 input variables. The control equations of this 3-bit binary decoder are implemented in Figure 4.9. The operation of the circuit is listed in Table 4.5.
Table 4.5 Truth Table for 3-to-8 Line Decoder

Inputs Outputs
A B C D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1

Figure 4.9 shows the diagram of 3-line-to-8-line decoder.

Inputs A, B, C and their complements A′, B′, C′ drive eight AND gates:
D0 = A′B′C′, D1 = A′B′C, D2 = A′BC′, D3 = A′BC
D4 = AB′C′, D5 = AB′C, D6 = ABC′, D7 = ABC
Fig. 4.9 A 3-Line-to-8-Line Decoder
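The eight AND equations above say simply that output Di goes HIGH when ABC is the binary code for i. A compact model, added for illustration:

def decoder_3to8(a, b, c):
    # D[i] = 1 exactly when the inputs A B C encode the number i.
    code = (a << 2) | (b << 1) | c
    return [1 if i == code else 0 for i in range(8)]

print(decoder_3to8(1, 0, 1))   # [0, 0, 0, 0, 0, 1, 0, 0] -> only D5 is HIGH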

4.5 ENCODERS

An encoder is a digital circuit that performs the inverse operation of a decoder.


Hence, the opposite of the decoding process is called encoding. An encoder is
also a combinational logic circuit that converts an active input signal into a coded
output signal.
n inputs (A0 … AN−1), only one HIGH at a time; m-bit output code (O0 … OM−1)

Fig. 4.10 Block Diagram of Encoder

An encoder has n input lines, only one of which is active at any time, and m output
lines. It encodes one of the active inputs such as a decimal or octal digit to a coded
output such as binary or BCD. Encoders can also be used to encode various
symbols and alphabetic characters. The process of converting from familiar symbols
or numbers to a coded format is called encoding. In an encoder, the number of
outputs is always less than the number of inputs. The block diagram of an encoder
is shown in Figure 4.10.
4.5.1 Octal-to-Binary Encoder
We know that binary-to-octal decoder (3-line-to-8-line decoder) accepts a 3-bit
input code and activates one of eight output lines corresponding to that code. An
octal-to-binary encoder (8-line-to-3-line encoder) performs the opposite function,
it accepts eight input lines and produces a 3-bit output code corresponding to the
activated input. The logic diagram and the truth table for an octal-to-binary encoder
is shown in Figure 4.11. It is implemented with three 4-input OR gates.The circuit
is designed so that when D0 is HIGH, the binary code 000 is generated, when D1
is HIGH, the binary code 001 is generated and so on.
D7 D6 D5 D4 D3 D2 D1 D0

Y0 = D1 + D3 + D5 + D7

Y1 = D2 + D3 + D6 + D7

Y2 = D4 + D5 + D6 + D7

Fig. 4.11 Logic Diagram of Octal-to-Binary Encoder

The design is made simple by the fact that only eight out of the total 2⁸ possible input conditions are used. Table 4.6 shows the truth table for the octal-to-binary encoder.
Table 4.6 Truth Table Octal-to-Binary Encoder

Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 Y2 Y1 Y0
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
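The three OR equations of Figure 4.11 can be exercised directly. The sketch below is an added illustration that assumes the eight inputs are supplied as a one-hot list D0..D7:

def octal_to_binary(d):
    # Y0 = D1 + D3 + D5 + D7; Y1 = D2 + D3 + D6 + D7; Y2 = D4 + D5 + D6 + D7
    y0 = d[1] | d[3] | d[5] | d[7]
    y1 = d[2] | d[3] | d[6] | d[7]
    y2 = d[4] | d[5] | d[6] | d[7]
    return y2, y1, y0

inputs = [0] * 8
inputs[6] = 1                    # activate input D6
print(octal_to_binary(inputs))   # (1, 1, 0), i.e., binary 110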

4.5.2 Decimal-to-BCD Encoder
This type of encoder has ten inputs, one for each decimal digit, and four outputs corresponding to the BCD code, as shown in Figure 4.12. The truth table for a decimal-to-BCD encoder is given in Table 4.7. From the truth table, we can determine the relationship between each BCD output and the decimal digits. For example, the most significant bit of the BCD code, D, is a 1 for decimal digit 8 or 9. The OR expression for bit D in terms of the decimal digits can therefore be written as:
D = 8 + 9
The output C is HIGH for decimal digits 4, 5, 6 and 7 and can be written as,
C = 4 + 5 + 6 + 7
Decimal inputs 0–9; BCD outputs 1, 2, 4 and 8

Fig. 4.12 Logic Symbol for a Decimal-to-BCD Converter

Similarly, B = 2 + 3 + 6 + 7 and A = 1 + 3 + 5 + 7 + 9


The above expressions for the BCD outputs can be implemented using OR gates as shown in Figure 4.13. The basic operation is as follows. When a HIGH appears
on one of the decimal digit input lines, the appropriate levels occur on the four
BCD output lines.
Table 4.7 Truth Table for Decimal-to-BCD Converter

Decimal Digit BCD code


D C B A
0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1

Decimal inputs 9–0 feed four OR gates that produce the BCD outputs A (LSB), B, C and D (MSB)

Fig. 4.13 Logic Diagram for Decimal-to-BCD Converter

4.6 MULTIPLEXER

Multiplexer means ‘many into one’. Multiplexing is the process of transmitting a


large number of information units over a small number of channels or lines.
A digital multiplexer or a data selector (MUX) is a combinational circuit that
accepts several digital data inputs and selects one of them and transmits information
on a single output line.
Control lines are used to make the selection. The basic multiplexer has
several data input lines and a single output line. The selection of a particular line is
controlled by a set of selection lines. The block diagram of a multiplexer with n
input lines, m control signals and one output line is shown in Figure 4.14. A
multiplexer is also called a data selector since it selects one of many inputs and
steers the data to the output line.
The multiplexer acts like a digitally controlled multiposition switch where the digital code applied to the SELECT inputs controls which data input will be switched to the output. A digital multiplexer has N inputs and only one output.
m control signals; n input signals; one output signal
Fig. 4.14 Block Diagram of Multiplexer


4.6.1 Basic Two-Input Multiplexer
Figure 4.15 shows the basic 2 × 1 MUX. This MUX has two input lines, A and B, and one output line, Y. There is one select input line, S. When the select input S = 0, data from A is selected to the output line Y. If S = 1, data from B will be selected
to the output Y. The logic circuitry for a two-input MUX with data inputs A and B
and select input S is shown in Figure 4.15. It consists of two AND gates G1 and
G2, a NOT gate G3 and an OR gate G4. The Boolean expression for the output is
given by
Y = A S  BS
When the select line input S = 0, the expression becomes
Y = A·1 + B·0 = A (Gate G1 is enabled)
which indicates that output Y will be identical to input signal A.
Similarly, when S = 1, the expression becomes
Y = A·0 + B·1 = B (Gate G2 is enabled)
showing that output Y will be identical to input signal B.
In many situations a strobe or enable input E is added to the select line S, as
shown in Figure 4.16. The multiplexer becomes operative only when the strobe
line E = 0.
Y = AS′ + BS
(a) Block Diagram of 2 × 1 MUX (b) Logic Diagram

Fig. 4.15 Basic 2-Input Multiplexer

Figure 4.16 shows the logic diagram of 2-input multiplexer with strobe input.
Fig. 4.16 Logic Diagram of 2-Input Multiplexer with Strobe Input

When the strobe input E is at logic 0, the output of the NOT gate G5 is 1 and the AND gates G1 and G2 are enabled. Accordingly, when S = 0 or 1, input A or B is selected as before. When the strobe input E = 1, both gates are disabled and the circuit will not function.
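The gate-level behaviour of Figure 4.16, two AND gates, a NOT gate, an OR gate and the active-LOW strobe, can be modelled as follows (an added sketch, not from the original text):

def mux2(a, b, s, e=0):
    # Y = AS' + BS when the strobe E = 0; the output is forced
    # to 0 when E = 1, since both AND gates are disabled.
    enable = 1 - e                 # NOT gate G5
    g1 = a & (1 - s) & enable      # gate G1 passes A when S = 0
    g2 = b & s & enable            # gate G2 passes B when S = 1
    return g1 | g2                 # OR gate G4

print(mux2(a=1, b=0, s=0))         # 1 -> A selected
print(mux2(a=1, b=0, s=1))         # 0 -> B selected
print(mux2(a=1, b=0, s=0, e=1))    # 0 -> circuit disabled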
4.6.2 Four-Input Multiplexer
A logic symbol and diagram of a 4-input multiplexer are shown in Figure 4.17. It
has two data select lines S0 and S1 and four data input lines. Each of the four data
input lines is applied to one input of an AND gate.
Depending on S1S0 being 00, 01, 10 or 11, data from input lines A to D are selected in that order. The function table is given in Table 4.8.
Table 4.8 Function Table for the 4-Input Multiplexer

Select Lines Output

S1 S0 Y

0 0 A
0 1 B
1 0 C
1 1 D

(a) Block Diagram of 4 × 1 Multiplexer (b) Logic Diagram

Fig. 4.17 Four-Input Multiplexer

Y = A S0 S1  BS 0 S1  CS0 S1  DS0 S1
If S1S0 = 00 (binary 0) is applied to the data select lines, the data on input A appears on the data output line.

Y = A·1·1 + B·1·0 + C·0·1 + D·0·0 = A (Gate G1 is enabled)
Similarly, Y = BS1′S0 = B·1·1 = B when S1S0 = 01 (Gate G2 is enabled)
Y = CS1S0′ = C·1·1 = C when S1S0 = 10 (Gate G3 is enabled)
Y = DS1S0 = D·1·1 = D when S1S0 = 11 (Gate G4 is enabled)
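The same pattern extends to four inputs. The sketch below, added for illustration, mirrors the Boolean expression for Figure 4.17:

def mux4(a, b, c, d, s1, s0):
    # Y = A.S1'S0' + B.S1'S0 + C.S1S0' + D.S1S0
    return ((a & (1 - s1) & (1 - s0)) |
            (b & (1 - s1) & s0) |
            (c & s1 & (1 - s0)) |
            (d & s1 & s0))

for s1 in (0, 1):
    for s0 in (0, 1):
        print(s1, s0, '->', mux4(1, 0, 1, 0, s1, s0))
# 0 0 -> 1 (A), 0 1 -> 0 (B), 1 0 -> 1 (C), 1 1 -> 0 (D)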
In a similar style, we can construct 8 × 1 MUXes, 16 × 1 MUXes, etc. Nowadays, two-, four-, eight- and 16-input multiplexers are readily available in the TTL and CMOS logic families. These basic ICs can be combined for multiplexing a larger
number of inputs.
Multiplexer Applications: Multiplexer circuits find numerous applications in digital systems. These applications include data selection, data routing, operation sequencing, parallel-to-serial conversion, waveform generation and logic function generation.

Check Your Progress


4. Why is decoding used in a digital circuit?
5. Define encoder.
6. What is a digital multiplexer?
7. Define data distributor or demultiplexer.

4.8 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. Combinational logic determines the logical outputs. These outputs are


determined by the logical function being performed and the logical input
states at that particular moment.
2. An electronic device (combinational circuit) which performs arithmetic
addition of two bits is called a half-adder.
3. A full-adder is a combinational circuit that performs the arithmetic sum of
three input bits and produces a sum and a carry.
4. Decoding is necessary in such applications as data multiplexing, rate
multiplying, digital display, digital-to-analog converters and memory
addressing.
5. An encoder is a digital circuit that performs the inverse operation of a decoder
and the opposite of the decoding process is called encoding. An encoder is
also a combinational logic circuit that converts an active input signal into a
coded output signal.

6. A digital multiplexer or a data selector (MUX) is a combinational circuit
that accepts several digital data inputs and selects one of them and transmits
information on a single output line.
7. A demultiplexer is a combinational logic circuit that receives information on
a single line and transmits this information on one of the many output lines.

4.9 SUMMARY

 The outputs of combinational logic circuits are only determined by their


current input state as they have no feedback, and any changes to the signals
being applied to their inputs will immediately have an effect at the output.
 Common combinational circuits made up from individual logic gates include
multiplexers, decoders and demultiplexers, full and half adders etc. One of
the most common uses of combination logic is in multiplexer and demultiplexer
type circuits.
 A sequential circuit uses flip flops. Unlike combinational logic, sequential
circuits have state, which means basically, sequential circuits have memory.
The main difference between sequential circuits and combinational circuits
is that sequential circuits compute their output based on input and state, and
that the state is updated based on a clock.
 An electronic device (combinational circuit), which performs arithmetic
addition of two bits is called a half-adder.
 A half-adder has only two inputs and there is no provision to add a carry
coming from the lower order bits when multi-bit addition is performed. For
this purpose, a third input terminal is added and this circuit is used to add A,
B and Cin.
 Decoding is necessary in such applications as data multiplexing, rate
multiplying, digital display, digital-to-analog converters and memory
addressing.
 An encoder is a digital circuit that performs the inverse operation of a
decoder. Hence, the opposite of the decoding process is called encoding.
 Multiplexer means ‘many into one’. Multiplexing is the process of transmitting
a large number of information units over a small number of channels or
lines.

4.10 KEY WORDS

 Full-adder: A combinational circuit that performs the arithmetic sum of


three input bits and produces a SUM and a CARRY.
 Half-subtractor: A combinational circuit which is used to perform
subtraction of two binary bits.
 Full-subtractor: A combinational circuit that performs 3-bit subtraction.

 Multiplexer: It acts like a digitally controlled multiposition switch where


the digital code applied to the SELECT input controls which data inputs
will be switched to the output.

4.11 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. Write briefly on the operation of a full adder.
2. Draw the logic symbol and diagram of a half-subtractor.
3. Write the Boolean expressions for the outputs of a full-adder.
4. What is a basic two-input multiplexer?
5. Define 1-to-4 demultiplexer.
Long Answer Questions
1. Draw the circuit design for full-adder.
2. Why is 3-line-to-8-line decoder used in circuit design?
3. Draw the block diagram of an encoder.
4. Draw the logic block diagram of a full-subtractor using two half-subtractors.
Write its truth table.
5. Draw the block diagram of a multiplexer.
6. Draw the logic diagram of a 1:4 demultiplexer.

4.12 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.


UNIT 5 SEQUENTIAL CIRCUITS


Structure
5.0 Introduction
5.1 Objectives
5.2 Flip-flops
5.2.1 S-R Flip-Flop
5.2.2 D Flip-Flop
5.2.3 J-K Flip-Flop
5.2.4 T Flip-Flop
5.2.5 Master–Slave Flip-Flops
5.3 Registers
5.3.1 Shift Registers Basics
5.3.2 Serial In/Serial out Shift Registers
5.3.3 Serial In/Parallel Out Shift Registers
5.3.4 Parallel In/Serial Out Shift Registers
5.3.5 Parallel In/Parallel out Registers
5.4 Counters
5.4.1 Asynchronous Counter Operations
5.4.2 Synchronous Counter Operations
5.4.3 Design of Synchronous Counters
5.5 Answers to Check Your Progress Questions
5.6 Summary
5.7 Key Words
5.8 Self Assessment Questions and Exercises
5.9 Further Readings

5.0 INTRODUCTION

In this unit, you will learn that digital systems can be either asynchronous or
synchronous. Synchronous sequential circuits can change their states only when
clock signals are present. Clock circuits produce rectangular or square waveforms.
You will also learn that a flip flop has a clock input called clocked flip-flop. A
clocked flip-flop is characterized by the fact that it changes states only in
synchronization with the clock pulse.
You will learn about the registers and counters. A register is a group of flip-
flops suitable for storing binary information. Each flip-flop is a binary cell capable
of storing one bit of information. An n-bit register has a group of n flip-flops and is
capable of storing any binary information containing n bits. The register is mainly
used for storing and shifting binary data entered into it from an external source. A
counter, by function, is a sequential circuit consisting of a set of flip-flops connected
in a special manner to count the sequence of the input pulses received in digital
form. Counters are fundamental components of digital system. Digital counters
find wide application like pulse counting, frequency division, time measurement
Self-Instructional and control and timing operations.
102 Material
5.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the concept of flip-flops
 Describe the triggering of flip-flops
 Define counters
 Explain the different types of counters
 Understand the concept of shift registers
 Describe the types of shift registers

5.2 FLIP-FLOPS

Synchronous circuits change their states only when clock pulses are present. The
operation of the basic latch can be modified by providing an additional control input that determines when the state of the circuit is to be changed. The latch with the additional control input is called a flip-flop. The additional control input is
either the clock or enable input.
Flip-flops are of different types depending on how their inputs and clock
pulses cause transitions between two states. There are four basic types, namely,
S-R, J-K, D and T flip-flops.
5.2.1 S-R Flip-Flop
The S-R flip-flop consists of two additional AND gates at the S and R inputs of the
S-R latch, as shown in Figure 5.1.

Fig. 5.1 Block Diagram of S-R Flip-Flop

In this circuit, when the clock input is LOW, the output of both the AND
gates are LOW and the changes in S and R inputs will not affect the output (Q ) of
the flip-flop. When the clock input becomes HIGH, the value at S and R inputs
will be passed to the output of the AND gates and the output (Q ) of the flip-flop
will change according to the changes in S and R inputs as long as the clock input is
HIGH. In this manner, one can strobe or clock the flip-flop so as to store either a
1 by applying S = 1, R = 0 (to set) or a 0 by applying S = 0, R = 1 (to reset) at any
time and then hold that bit of information for any desired period of time by applying
a LOW at the clock input. This flip-flop is called clocked S-R flip-flop.
The S-R flip-flop which consists of the basic NOR latch and two AND
gates is shown in Figure 5.2.


Fig. 5.2 Clocked NOR-Based S-R Flip-Flop

The S-R flip-flop which consists of the basic NAND latch and two other
NAND gates is shown in Figure 5.3. The S and R inputs control the state of the
flip-flop in the same manner as described earlier for the basic or unclocked S-R
latch. However, the flip-flop does not respond to these inputs until the rising edge
of the clock signal occurs. The clock pulse input acts as an enable signal for the
other two inputs. The outputs of NAND gates 1 and 2 stay at the logic 1 level as
long as the clock input remains at 0. This 1 level at the inputs of NAND-based
basic S-R latch retains the present state, i.e., no change occurs. The characteristic
table of the S-R flip-flop is shown in truth table of Table 5.1 which shows the
operation of the flip-flop in tabular form.

(a) NAND-based S-R Flip-Flop (b) Graphic Symbol

Fig. 5.3 NAND Based S-R Flip-Flop


Table 5.1 Characteristic Truth Table of S-R Flip-Flop

Present State Clock Pulse Data Inputs Next State Action


Qn CLK S R Qn+1
0 0 0 0 0 No change
1 0 0 0 1 No change
0 1 0 0 0 No change
1 1 0 0 1 No change
0 0 0 1 0 No change
1 0 0 1 1 No change
0 1 0 1 0 Reset
1 1 0 1 0 Reset
0 0 1 0 0 No change
1 0 1 0 1 No change
0 1 1 0 1 Set
1 1 1 0 1 Set
0 0 1 1 0 No change
1 0 1 1 1 No change
0 1 1 1 ? Forbidden
1 1 1 1 ? Forbidden
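The behaviour tabulated above is compact enough to model in a few lines of software. The following Python sketch is an illustrative behavioural model only (the function name sr_next is our own choice, not from the figures); it reproduces the no-change, set, reset and forbidden actions of Table 5.1.

    # Behavioural model of a clocked S-R flip-flop (illustrative sketch)
    def sr_next(q, clk, s, r):
        """Return the next state Qn+1 for present state q and inputs CLK, S, R."""
        if clk == 0:            # clock LOW: AND gates block S and R, state held
            return q
        if s == 1 and r == 1:   # forbidden input combination
            raise ValueError("S = R = 1 is forbidden")
        if s == 1:
            return 1            # set
        if r == 1:
            return 0            # reset
        return q                # S = R = 0: no change

    # Reproduce a few rows of Table 5.1
    for q, clk, s, r in [(0, 1, 1, 0), (1, 1, 0, 1), (0, 0, 1, 0)]:
        print(f"Qn={q} CLK={clk} S={s} R={r} -> Qn+1={sr_next(q, clk, s, r)}")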
5.2.2 D Flip-Flop

The D (delay) flip-flop has only one input called the Delay (D) input and two
outputs Q and Q'. It can be constructed from an S-R flip-flop by inserting an
inverter between S and R and assigning the symbol D to the S input. The structure
of D flip-flop is shown in Figure 5.4(a). Basically, it consists of a NAND flip-flop
with a gating arrangement on its inputs. It operates as follows:
1. When the CLK input is LOW, the D input has no effect, since the set and
reset inputs of the NAND flip-flop are kept HIGH.
2. When the CLK goes HIGH, the Q output will take on the value of the D
input. If CLK =1 and D =1, the NAND gate-1 output goes 0 which is the
S input of the basic NAND-based S-R flip-flop and NAND gate-2 output
goes 1 which is the R input of the basic NAND-based S-R flip-flop.
Therefore, for S = 0 and R = 1, the flip-flop output will be 1, i.e., it follows
D input. Similarly, for CLK=1 and D = 0, the flip-flop output will be 0. If D
changes while the CLK is HIGH, Q will follow and change quickly.
The logic symbol for the D flip-flop is shown in Figure 5.4(b). A simple way
of building a delay D flip-flop is shown in Figure 5.4(c). The truth table of D flip-
flop is given in Table 5.2 from which it is clear that the next state of the flip-flop at
time (Qn+1) follows the value of the input D when the clock pulse is applied. As
transfer of data from the input to the output is delayed, it is known as the Delay (D)
flip-flop. The D-type flip-flop is either used as a delay device or as a latch to store
1 bit of binary information.

(a) Using NAND Gates (b) Logic Symbol (c) Using S-R Flip-Flop

Fig. 5.4 D Flip-Flop

Table 5.2 Truth Table of D Flip-Flop

CLK Input Output


D Qn+1
1 0 0
1 1 1
0 X No change

State Diagram and Characteristic Equation of D Flip-Flop


The state transition diagram for the delay flip-flop is shown in Figure 5.5.

Fig. 5.5 State Diagram of Delay Flip-Flop

From the above state diagram, it is clear that when D =1, the next state will
be 1; when D = 0, the next state will be 0, irrespective of its previous state. From
the state diagram, one can draw the Present state–Next state table and the
application or excitation table for the Delay flip-flop as shown in Table 5.3 and
Table 5.4 respectively.
Table 5.3 Present State–Next State Table for D Flip-Flop

Present State Delay Input Next State


Qn    D    Qn+1
0 0 0
0 1 1
1 0 0
1 1 1

Table 5.4 Application or Excitation Table for D Flip-Flop

Qn    Qn+1    Excitation Input D
0 0 0
0 1 1
1 0 0
1 1 1

Using the Present state–Next state table, the K-map for the next state (Qn+1)
of the Delay flip-flop can be drawn as shown in Figure 5.6 and the simplified
expression for Qn+1 can be obtained as described below.

Fig. 5.6 Next State (Qn+1) Map for D Flip-Flop

From the above K-map, the characteristic equation for the Delay flip-flop is,
Qn+1 = D
Hence, in a Delay flip-flop, the next state follows the Delay input, as
represented by the characteristic equation.
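Because the D flip-flop is an S-R flip-flop with an inverter between S and R, its behaviour reduces to a single assignment. The sketch below is illustrative only (d_next is our own name):

    # D flip-flop: S = D, R = NOT D, so the output simply follows D
    def d_next(q, clk, d):
        """Qn+1 = D while CLK is HIGH; the state is held while CLK is LOW."""
        return d if clk == 1 else q

    # Reproduce the Present state-Next state table (Table 5.3)
    for q in (0, 1):
        for d in (0, 1):
            print(f"Qn={q} D={d} -> Qn+1={d_next(q, 1, d)}")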


5.2.3 J-K Flip-Flop
A J-K flip-flop has a characteristic similar to that of an S-R flip-flop. In addition,
the indeterminate condition of the S-R flip-flop is permitted in it. Inputs J and K
behave like inputs S and R to set and reset the flip-flop, respectively. When J = K =
1, the flip-flop output toggles, i.e., switches to its complement state; if Q = 0, it
switches to Q =1 and vice versa.
A J-K flip-flop can be obtained from the clocked S-R flip-flop by augmenting
two AND gates as shown in Figure 5.7(a). The data input J and the output Q' are
applied to the first AND gate, and its output (JQ') is applied to the S input of the S-R
flip-flop. Similarly, the data input K and the output Q are connected to the second
AND gate and its output (KQ) is applied to the R input of the S-R flip-flop. The graphic
symbol of J-K flip-flop is shown in Figure 5.7(b) and the truth table is shown in
Table 5.5. The outputs for the four possible input combinations are as follows.

(a) J-K Flip-Flop using S-R Flip-Flop (b) Graphic Symbol of J-K Flip-Flop

Fig. 5.7 J-K Flip-Flop

Table 5.5 Truth Table of J-K Flip-Flop

CLK Inputs Output


J K Q n+1 Action
X 0 0 Qn No change
1 0 1 0 Reset
1 1 0 1 Set
1 1 1 Qn' Toggle

State Diagram and Characteristic Equation of J-K Flip-Flop


The state transition diagram for J-K flip-flop can be drawn as shown in Figure
5.8.

Fig. 5.8 State Diagram of J-K Flip-Flop


From the above state diagram, one can easily understand that the state
transition from 0 to 1 takes place whenever J is asserted (i.e., J =1 ) irrespective
of K value. Similarly, state transition from 1 to 0 takes place whenever K is asserted
(i.e., K = 1) irrespective of the value of J. Also, the state transition from 0 to 0
occurs whenever J = 0, irrespective of the value of K and the state transition from
1 to 1 occurs whenever K = 0, irrespective of J value.
From the above state diagram and truth table (Table 5.5) of J-K flip-flop,
the Present state–Next state table and application table or excitation table for J-K
flip-flop are shown in Table 5.6 and Table 5.7, respectively.
Table 5.6 Present State–Next State Table for J-K Flip-Flop

Present State Inputs Next State


Qn J K Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0

Table 5.7 Application or Excitation Table for J-K Flip-Flop

Qn Qn+1 Excitation Inputs


J K
0 0 0 d
0 1 1 d
1 0 d 1
1 1 d 0

From the Table 5.6, a Karnaugh map (K-Map) for the next state transition
(Qn+1) can be drawn as shown in Figure 5.9 and the simplified logic expression
which represents the characteristic equation of J-K flip-flop can be obtained as
follows.
From the K-map shown in Figure 5.9, the characteristic equation of J-K
flip-flop can be written as,
Qn+1 = JQn' + K'Qn

Fig. 5.9 Next-State (Qn+1) K-Map for J-K Flip-Flop
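The characteristic equation can be verified by brute-force enumeration against Table 5.6. A short illustrative sketch:

    # Verify Qn+1 = J*Qn' + K'*Qn against Table 5.6
    for q in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                q_next = int((j and not q) or (not k and q))
                print(f"Qn={q} J={j} K={k} -> Qn+1={q_next}")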
5.2.4 T Flip-Flop

Another basic flip-flop, called the T or Trigger or Toggle flip-flop, has only a
single data (T) input, a clock input and two outputs Q and Q'. The T-type flip-flop
is obtained from a J-K flip-flop by connecting its J and K inputs together. The
designation T comes from the ability of the flip-flop to ‘toggle’ or complement its
state.
The block diagram of a T flip-flop and its circuit implementation using a J-
K flip-flop are shown in Figure 5.10. The J and K inputs are wired together. The
truth table for T flip-flop is shown in Table 5.8.

(a) Block Diagram of T Flip-Flop (b) T Flip-Flop using a J-K Flip Flop

Fig. 5.10 T Flip-flop

When the T input is in the 0 state (i.e., J = K = 0) prior to a clock pulse, the
Q output will not change with clocking. When the T input is at 1 (i.e., J = K = 1)
level prior to clocking, the output will be in the Q' state after clocking. In other
words, if the T input is a logical 1 and the device is clocked, then the output will
change state regardless of what the output was prior to clocking. This is called
toggling, hence the name T flip-flop.
Table 5.8 Truth Table of T Flip-Flop

Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0

The above truth table shows that when T = 0, then Qn+1 = Qn, i.e., the next
state is the same as the present state and no change occurs. When T = 1, then
Qn+1 = Qn', i.e., the state of the flip-flop is complemented.

Application of T flip-flop: T-type flip-flop is most often seen in counters and


sequential counting networks because of its inherent divide-by-2 capability. When
a clock pulse is applied, then the output changes state once every input cycle, thus
repeating one cycle for every two input cycles. This is the action required in many
cases for binary counters.
State Diagram and Characteristic Equation of T Flip-Flop
The state transition diagram for the Trigger flip-flop is shown in Figure 5.11.


Fig. 5.11 State Diagram of Trigger Flip-Flop

From the above state diagram, it is clear that when T = 1, the flip-flop
changes or toggles its state irrespective of its previous state. When T = 1 and
Q n = 0, the next state will be 1 and when T = 1 and Q n = 1, the next state will be
0. Similarly, one can understand that when T = 0, the flip-flop retains its previous
state. From the above state diagram, one can draw the Present state–Next state
table and application or excitation table for the Trigger flip-flop as shown in Table
5.9 and Table 5.10, respectively.
Table 5.9 Present State–Next State Table for T Flip-Flop

Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0

Table 5.10 Application or Excitation Table for T Flip-Flop

Qn Qn+1 Excitation Input


T
0 0 0
0 1 1
1 0 1
1 1 0

From Table 5.9, the K-map for the next state (Qn+1) of the Trigger flip-flop
can be drawn as shown in Figure 5.12 and the simplified expression for Qn+1 can
be obtained as follows.

Fig. 5.12 Next State (Qn+1) Map for T Flip-Flop

From the K-map shown in Figure 5.12, the characteristic equation for Trigger
flip-flop is,
Qn+1 = TQn' + T'Qn

So, in a Trigger flip-flop, the next state will be the complement of the previous
state when T = 1.
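The same equation can be written as Qn+1 = T XOR Qn, which also makes the divide-by-2 action mentioned earlier easy to see: with T held at 1, Q completes one full cycle for every two clock pulses. An illustrative sketch:

    # T flip-flop: Qn+1 = T XOR Qn; with T = 1 the output toggles every pulse
    q = 0
    for pulse in range(1, 9):
        q ^= 1                          # T = 1 on every clock pulse
        print(f"pulse {pulse}: Q = {q}")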
5.2.5 Master–Slave Flip-Flops

A Master–Slave flip-flop can be constructed using two J-K flip-flops as shown in


Figure 5.13. The first flip-flop, called the Master, is driven by the positive edge of
the clock pulse; the second flip-flop, called the Slave, is driven by the negative
edge of the clock pulse. Therefore, when the clock input has a positive edge, the
master acts according to its J-K inputs, but the slave does not respond since it
requires a negative edge at the clock input. When the clock input has a negative
edge, the slave flip-flop copies the master outputs. But the master does not respond
to the feedback from Q and Q', since it requires a positive edge at its clock input.
Thus, the Master–Slave flip-flop does not have race around problem.

Fig. 5.13 Master–Slave J-K Flip-Flop

A Master–Slave J-K flip-flop constructed using NAND gates is shown in


Figure 5.14. It consists of two flip-flops connected in series. NAND gates 1
through 4 form the master flip-flop and NAND gates 5 through 8 form the slave
flip-flop. When the clock is positive, a change in J and K inputs cause a change of
state in the master flip-flop. During this period, the slave retains its previous state
and serves as a buffer between the master and the output. When the clock goes
negative, the master flip-flop does not respond, i.e., it maintains its previous state,
while the slave flip-flop is enabled and changes its state to that of the master flip-
flop. The new state of the slave then becomes the state of the entire Master–Slave
flip-flop. The operation of Master–Slave J-K flip-flop for different J-K input
combinations can be explained as follows:

Fig. 5.14 Clocked Master–Slave J-K Flip-Flop using NAND Gates

If J = 1 and K = 0, the master flip-flop sets on the positive clock edge. The
HIGH Q (1) output of the master drives the input ( J ) of the slave. So, when the
negative clock edge hits, the slave also sets. The slave flip-flop copies the action
of the master flip-flop.
If J = 0 and K = 1, the master resets on the leading edge of the CLK pulse.
The HIGH Q' output of the master drives the input (K) of the slave flip-flop. Then,
the slave flip-flop resets at the arrival of the trailing edge of the CLK pulse. Once
again, the slave flip-flop copies the action of the master flip-flop.
If J = K = 1, the master flip-flop toggles on the positive clock edge and the
slave toggles on the negative clock edge. The condition J = K = 0 input does not
produce any change.
Master–Slave flip-flops operate from a complete clock pulse and the outputs
change on the negative transition.
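The two-phase action described above can be sketched as follows: the master samples J and K on the positive edge and the slave copies the master on the negative edge, so the output changes once per complete clock pulse. This is a behavioural model only, not the gate-level circuit of Figure 5.14:

    # One complete clock pulse of a Master-Slave J-K flip-flop (illustrative)
    def ms_jk_pulse(q, j, k):
        master = (j and not q) or (not k and q)   # positive edge: master acts
        return int(master)                        # negative edge: slave copies

    q = 0
    for j, k in [(1, 0), (0, 1), (1, 1), (1, 1), (0, 0)]:
        q = ms_jk_pulse(q, j, k)
        print(f"J={j} K={k} -> Q={q}")   # set, reset, toggle, toggle, no change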

Check Your Progress


1. Define a flip-flop.
2. What are the different types of flip-flops?
3. How will you obtain the T-type flip-flop from J-K flip-flop?

5.3 REGISTERS

A register is a group of flip-flops used to store or manipulate data or both. Each


flip-flop is capable of storing one bit of information. An n-bit register has n flip-
flops and is capable of storing any binary information containing n bits.
The register is a type of sequential circuit and an important building block
used in digital systems like multipliers, dividers, memories, microprocessors, etc.
A register stores a sequence of 0s and 1s. Registers that are used to store
information are known as memory registers. If they are used to process
information, they are called shift registers.
5.3.1 Shift Registers Basics
A shift register is a group of FFs arranged so that the binary numbers stored in the
FFs are shifted from one FF to the next for every clock pulse.
Shift registers often are used to store data momentarily. Figure 5.15 shows
a typical example of where shift registers might be used in a digital system
(calculator). Notice the use of shift registers to hold information from the encoder
for the processing unit. A shift register is also being used for temporary storage
between the processing unit and the decoder. Shift registers are also used at other
locations within a digital system.

Fig. 5.15 Block Diagram of a Digital System using Shift Registers

There are two modes of operation for registers. The first operation is series
or serial operation. The second type of operation is parallel shifting. Input and
output functions associated with registers include (1) serial input/serial output (2)
serial input/parallel output (3) parallel input/parallel output (4) parallel input/serial
output.
Hence input data are presented to registers in either a parallel or a serial
format.
To input parallel data to a register requires that all the flip-flops be affected
(set or reset) at the same time. To output parallel data requires that the flip-flop Q
outputs be accessible. Serial input data loading requires that one data bit at a time
is presented to either the most or least significant flip-flop. Data are shifted from
the flip-flop initially loaded to the neat one in series. Serial output data are taken
from a single flip-flop, one bit at a time.
Serial data input or output operations require multiple clock pulses. Parallel
data operations only take one clock pulse. Data can be loaded in one format and
removed in another. Two functional parts are required by all shift registers: (1)
data storage flip-flops and (2) logic to load, unload and shift the stored information.
The block diagrams of the four basic register types are shown in Figure 5.16.
Registers can be designed using discrete flip-flops (S-R, J-K and D-type). Registers
are also available as MSI devices.

[Block diagrams: (a) Serial In/Serial Out, (b) Serial In/Parallel Out, (c) Parallel In/Serial Out, (d) Parallel In/Parallel Out]
Fig. 5.16 Register Types
5.3.2 Serial In/Serial Out Shift Registers
This type of shift register accepts data serially, that is, one bit at a time on a single
line. It produces the stored information on its output also in serial form. Data may
be shifted left (from low- to high-order bits) using a shift-left register or shifted
right (from high- to low-order bits) using a shift-right register.
Shift Left Register
A shift left register can be built using D FFs or J-K FFs as shown in Figure 5.17.
A J-K FF register requires connection of both J and K inputs; input data are
connected to the rightmost (lowest order) stage with data being shifted bit-by-bit to
the left.
Fig. 5.17 Shift-Left Registers: (a) J-K Type and (b) D Type

For the register of Figure 5.17(b) using D FFs, a single data line is connected
between stages; again, four shift pulses are required to shift a 4-bit word into the
4-stage register.
The shift pulse is applied to each stage, operating each simultaneously. When
the shift pulse occurs, the data input is shifted into that stage. Each stage is set or
reset corresponding to the input data at the time the shift pulse occurs. Thus the
input data bit is shifted into stage A by the first shift pulse. At the same time the
data of stage A is shifted into stage B, and so on for the following stages. For each
shift pulse, data stored in the register stages shift left by one stage. New data are
shifted into stage A, whereas the data present in stage D are shifted out (to the
left) for use by some other shift register or computer unit.
For example, consider starting with all stages reset and applying a steady
logical-1 input as data input to stage A. The data in each stage after each of four
shift pulses is shown in Table 5.11. Notice in Table 5.11 that the logical-1 input
shifts into stage A and then shifts left to stage D after four shift pulses.
As another example, consider shifting alternate 0 and 1 data into stage A,
starting with all stages at logical-1. Table 5.12 shows the data in each stage after
each of four shift pulses.
Table 5.11 Operation of Shift-Left Register

Shift Pulse D C B A
0 0 0 0 0
1 0 0 0 1
2 0 0 1 1
3 0 1 1 1
4 1 1 1 1

As a third example of shift register operation, consider starting with the count
in step 4 of Table 5.12 and applying four more shift pulses
while placing a steady logical-0 input as data input to stage A. This is shown in
Table 5.13.
Table 5.12 Shift- Register Operation Table 5.13 Final Stage
Shift Pulse D C B A Shift Pulse D C B A
0 1 1 1 1 0 0 1 0 1
1 1 1 1 0 1 1 0 1 0
2 1 1 0 1 2 0 1 0 0
3 1 0 1 0 3 1 0 0 0
4 0 1 0 1 4 0 0 0 0
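The shifting tabulated in Tables 5.11 to 5.13 is easy to reproduce in software: on each shift pulse every stage takes the value of the stage to its right, and the new input bit enters stage A. An illustrative sketch:

    # 4-stage shift-left register: stages listed as [D, C, B, A] (illustrative)
    def shift_left(stages, data_in):
        return stages[1:] + [data_in]

    stages = [0, 0, 0, 0]                   # all stages reset
    for pulse in range(1, 5):               # steady logical-1 at the data input
        stages = shift_left(stages, 1)
        print(f"pulse {pulse}: D C B A = {stages}")   # matches Table 5.11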

Shift Right Register


A shift-right register can also be built using D FFs or J-K FFs as shown in
Figure 5.18. Let us illustrate the entry of the 4-bit binary number 1101 into the
register, beginning with the rightmost bit. The 1 is put onto the data input line,
making D = 1 for FF A. When the first clock pulse is applied, FF A is SET, thus
storing the 1. Next the 0 is applied to the data input, making D = 0 for FF A,
while D = 1 for FF B because the D input of FF B is connected to the QA output.
Fig. 5.18 J-K Flip-Flops in Shift Right Register

When the second clock pulse occurs, the 0 on the data input is “shifted” into the
FF A because FF A RESETs, and the 1 that was in FF A is “shifted” into FF B.
The next 1 in the binary number is now put onto the data-input line, and a clock
pulse is applied. The 1 is entered into FF A, the 0 stored in FF A is shifted into FF
B, and the 1 stored in FF B is shifted into FF C. The last bit in the binary number,
a 1, is now applied to the data input, and a clock pulse is applied. This time the 1 is
entered into FF A, the 1 stored in FF A is shifted into FF B, the 0 stored in FF B is
shifted into FF C, and the 1 stored in FF C is shifted into FF D. This completes the
serial entry of the 4-bit binary number into the shift register, where it can be stored
for any amount of time. Table 5.15 shows the action of shifting all logical-1 inputs
into an initially reset shift register. Table 5.14 shows the register operation for the
entry of 1101.
Table 5.14 Register Operation Table 5.15 Shifting Logical Inputs
Shift Pulse QA QB QC QD Shift Pulse QA QB QC QD
0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 1 1 0 0 0
2 0 1 0 0 2 1 1 0 0
3 1 0 1 0 3 1 1 1 0
4 1 1 0 1 4 1 1 1 1
The waveforms shown in Figure 5.19 illustrate the entry of 4-bit number 0100.
For a J-K FF, the data bit to be shifted into the FF must be present at the J and
K inputs when the clock transitions (low or high). Since the data bit is either a 1 or
a 0, there are two cases:
1. To shift a 0 into the FF, J = 0 and K = 1.
2. To shift a 1 into the FF, J = 1 and K = 0.
At time A: All the FFs are reset. The FF outputs just after time A are QRST = 0000.
At time B: The FFs all contain 0s; the FF outputs are QRST = 0000.
Fig. 5.19 Waveforms of 4-Bit Serial Input Shift Register
At time C: The FFs still all contain 0s. The FF outputs after time C are QRST = 1000.
At time D: The FF outputs are QRST = 0100.

5.3.3 Serial In/Parallel Out Shift Registers


The logic diagram of a 4-bit serial-in-parallel-out shift register is shown in
Figure 5.20. It has one input and the number of output pins is equal to the number
of FFs in the register. In this register data is entered serially but shifted out in
parallel. In order to shift the data out in parallel, it is necessary to have all the data
available at the outputs at the same time. Once the data are stored, each bit appears
on its respective output and all bits are available simultaneously, rather than on a
bit-by-bit basis as with the serial output.

Fig. 5.20 A Serial-In-Parallel-Out Shift Register

5.3.4 Parallel In/Serial Out Shift Registers


For a register with parallel data inputs, the bits are entered simultaneously into
their respective stages on parallel lines rather than on a bit-by-bit basis on one line.
A 4-bit parallel-in-serial-out shift register is illustrated in Figure 5.21. It has
four data-input lines A, B, C and D and a SHIFT/LOAD input. SHIFT/LOAD is
a control input that allows four bits of data to enter the register in parallel or the
data to be shifted serially.
When SHIFT/LOAD is LOW, AND gates G1 through G3 are enabled,
allowing each data bit to be applied to the D input of its respective FF. When a
clock pulse is applied, the FFs with D = 1 will SET and those with D = 0 will
RESET, thereby storing all four bits simultaneously.

Sequential Circuits
When SHIFT/LOAD is HIGH, AND gates G1 through G3 are
disabled and AND gates G4 through G6 are enabled, allowing the data bits to shift
right from one stage to the next. The OR gates allow either the normal shifting
operation or the parallel data-entry operation, depending on which AND gates
NOTES
are enabled by the level on the SHIFT/ LOAD input.

Fig. 5.21 A 4-Bit Parallel-In-Serial-Out Shift Register
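The SHIFT/LOAD control translates directly into two cases per clock pulse: load all four bits at once, or shift the stored bits one stage to the right. The sketch below is an illustrative model of that behaviour, not the gate network of Figure 5.21:

    # 4-bit parallel-in/serial-out register (illustrative model)
    def piso_pulse(stages, shift_load, parallel_in=None, serial_in=0):
        if shift_load == 0:                 # LOW: parallel load of A, B, C, D
            return list(parallel_in)
        return [serial_in] + stages[:-1]    # HIGH: shift right one stage

    stages = piso_pulse([0, 0, 0, 0], 0, parallel_in=[1, 1, 0, 1])
    for pulse in range(4):
        print("serial data out:", stages[-1])   # QD is the serial output
        stages = piso_pulse(stages, 1)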

5.3.5 Parallel In/Parallel Out Registers


In this type of register, data inputs can be shifted either in or out of the register in
parallel. It has four inputs and four outputs. In this register, there is no
interconnection between successive FFs since no serial shifting is required. Therefore,
the moment the parallel entry of the input data is accomplished, the respective bits
will appear at the parallel outputs.
The logic diagram of a 4-bit parallel-in-parallel-out shift register is shown in
Figure 5.22. Let A, B, C and D be the inputs applied directly to delay (D) inputs
of respective FFs. Now on applying a clock pulse, these inputs are entered into
the register and are immediately available at the outputs QA, QB, QC and QD.


Fig. 5.22 Logic Diagram of a 4-Bit Parallel-In-Parallel-Out Shift Register

5.4 COUNTERS

In addition to functioning as a frequency divider, the circuit of Figure 5.23(a)
operates as a binary counter. Here the J-K flip-flops are negative edge-triggered.
The flip-flops are initially RESET. Let Q2Q1Q0 be a binary number, where Q2 is the
2² position, Q1 is the 2¹ position and Q0 is the 2⁰ position. The first eight states
of Q2Q1Q0 in the timing diagram should be recognised as the binary counting
sequence from 000 to 111. After the first clock pulse, the flip-flops are in the 001
state, i.e., Q2 = 0, Q1 = 0 and Q0 = 1, which represents 001₂ (= 1₁₀); after the
second CLK pulse, the flip-flops represent 010₂ (= 2₁₀); after the third pulse,
011₂ (= 3₁₀) and so on until, after seven CLK pulses, 111₂ (= 7₁₀). On the eighth
pulse, the flip-flops return to the 000 state, and the binary sequence repeats itself
after every eight clock pulses as shown in the timing diagram of Figure 5.23(b).
Thus, the flip-flops count in sequence from 0 to 7 and then recycle back to 0 to
begin the sequence again.

Fig. 5.23 J-K Flip-Flops Wired as 3-Bit Binary Counter
5.4.1 Asynchronous Counter Operations
Figure 5.24 shows a 4-bit binary ripple counter using J-K flip-flops. The clock
signal is connected to the clock input of only the first stage flip-flop A, i.e., the least
significant bit stage of the counter; the output of A drives B, the output of B
drives flip-flop C and the output of C drives flip-flop D. The triggers move through
the flip-flops like a ripple. Hence this counter is known as a ripple counter. All the
J and K inputs are tied to VCC (1), which means that each flip-flop toggles on the
negative edge of its clock input. With four binary places (QD, QC, QB and QA), we
can count from 0000 to 1111 (0 to 15 in decimal).
Consider, initially, all flip-flops to be in the logical 0 state (i.e., QA = QB = QC
= QD = 0) in Figure 5.24(a). As clock pulse 1 arrives at the clock (CLK) input of
flip-flop A, it toggles (on the negative edge) and the display shows 0001. With the
arrival of the second clock pulse, flip-flop A toggles again and QA goes from 1 to 0.
This causes flip-flop B to toggle to 1. The count on the display now reads 0010.
The counting continues, with each flip-flop output triggering the next flip-flop on its
negative going pulse. Before the arrival of sixteenth clock pulse all flip-flops are in
the logical 1 state and the display reads 1111. Clock pulse 16 causes QA, QB, QC,
QD to go to logical 0 state in turn.
Table 5.16 shows the sequence of binary states that the flip-flops will follow
as clock pulses are applied continuously. The counting mode of the mod-16 counter
is shown by waveforms in Figure 5.24(b). The clock input is shown on the top
line. The state of each flip-flop is shown on the waveforms. The binary count is
shown across the bottom of the diagram.
The delay between the responses of successive flip-flops is typically 5–20
nanoseconds.
Fig. 5.24 4-Bit Binary Ripple Counter
Table 5.16 State Table of 4-Bit Binary Ripple Counter

Number of Clock Pulses    QD QC QB QA
0    0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0

MOD–Number or Modulus
The MOD-number (or the modulus) of a counter is the total number of states
which the counter goes through in each complete cycle.
MOD number = 2ᴺ
where N = number of flip-flops.
The maximum binary count of the counter is 2ᴺ – 1. Thus, a 4 flip-flop counter
can count as high as (1111)₂ = 2⁴ – 1 = 16 – 1 = 15₁₀. The MOD number can be
increased by adding more FFs to the counter.
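Both the MOD-number relation and the counting sequence of Table 5.16 can be confirmed with a short illustrative sketch:

    N = 4                      # number of flip-flops
    mod = 2 ** N               # MOD number = 2^N; maximum count = 2^N - 1
    print("MOD =", mod, "maximum count =", mod - 1)

    count = 0
    for pulses in range(mod + 1):
        print(f"{pulses:2d} pulses: QD QC QB QA = {count:04b}")
        count = (count + 1) % mod    # recycles to 0000 after 1111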
5.4.2 Synchronous Counter Operations
A synchronous, parallel, or clocked counter is one in which all stages are triggered
simultaneously.
When the carry has to propagate through a chain of n flip-flops, the overall
propagation delay time is n·tpd. For this reason ripple counters are too slow for some
applications. To get around the ripple-delay problem, we can use a synchronous counter.
A 4-bit (MOD-16) synchronous counter with parallel carry is shown in
Figure 5.25. The clock is connected directly to the CLK input of each flip-flop,
i.e., the clock pulses drive all flip-flops in parallel. In this counter only the LSB
flip-flop A has its J and K inputs connected permanently to VCC, i.e., at the high
level. The J, K inputs of the other flip-flops are driven by some combination of
flip-flop outputs. The J and K inputs of flip-flop B are connected to the QA output
of flip-flop A. The J and K inputs of FF C are connected to the AND-operated
output of QA and QB. Similarly, the J and K inputs of FF D are connected to the
AND-operated output of QA, QB and QC.
Fig. 5.25 4-Stage Synchronous Counter

For this circuit to count properly, on a given negative transition of the clock, only
those FFs that are supposed to toggle on the negative transition should have J = K =
1 when the negative transition occurs. According to the state Table 5.17, FF A is
required to change state with the occurrence of each clock pulse. FF B changes its
state when QA = 1. Flip-flop C toggles only when QA = QB = 1. And flip-flop D
changes state only when QA = QB = QC = 1. In other words, a flip-flop toggles
on the next negative clock transition if all lower-order bits are 1s.
The counting action of counter is as follows:
1. The first negative clock edge sets QA to get Q = 0001.
2. Since QA is 1, FF B is conditioned to toggle on the next negative clock edge.
3. When the second negative clock edge arrives, QB and QA simultaneously
toggle and the output word becomes Q = 0010. This process continues.
4. By adding more flip-flops and gates we can build a synchronous counter of
any length. The advantage of the synchronous counter is its speed: it takes only
one propagation delay time for the correct binary count to appear after the
clock edge hits.
Table 5.17 State Table of 4-Bit Synchronous Counter
State QD QC QB QA

0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0
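The toggle rule stated above (a flip-flop toggles on the next clock edge only if all lower-order bits are 1s) generates exactly the sequence of Table 5.17. An illustrative sketch:

    # Synchronous counter: all stages are clocked together (illustrative)
    def sync_pulse(q):                      # q = [QA, QB, QC, QD]
        nxt = q[:]
        for i in range(len(q)):             # stage i toggles when all lower
            if all(q[j] for j in range(i)): # bits are 1 (J = K = 1)
                nxt[i] ^= 1
        return nxt

    q = [0, 0, 0, 0]
    for _ in range(16):
        print("QD QC QB QA =", q[::-1])
        q = sync_pulse(q)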

Advantage of Synchronous Counters Over Asynchronous


In asynchronous counters, the propagation delays of the FFs add together to
produce the overall delay. In synchronous counters, the total response time is the sum
of the time it takes one FF to toggle and the time for new logic levels to propagate
through a single AND gate to the J, K inputs. The total delay is the same no matter
how many FFs are in the counter. Thus, a synchronous counter operates at a
much higher input frequency. But, the circuitry of the synchronous counter is more
complex than that of the asynchronous counter.
5.4.3 Design of Synchronous Counters
In this section, the general method to design the different types of synchronous
counters using various types of flip-flops is explained in detail.
The steps involved in the design of synchronous counter are listed below:
Step 1. From the word description of the problem, draw a state diagram which
describes the operation of the counter.
Step 2. From the above state diagram, obtain Present State – Next State (PS-NS)
table of the counter and check the same to ascertain whether it has any
equivalent states. Any two states are said to be equivalent, if and only if
their next states are one and the same. In such a case, one of the equivalent
states can be eliminated from the state table. Thus, in this step, the state
table is modified in such a way that there is no redundant state in it.
Step 3. Make a state assignment and document the same in the above state table.
Step 4. Decide the type of memory element to be used in the counter design and
then obtain the excitation table from PS–NS table using the application
table of the flip-flop.
Step 5. Draw the excitation maps for various excitation inputs of flip-flops and
simplify the excitation functions.
Step 6. Draw the schematic diagram of the counter.
To understand the above design procedures, the following counter design
problems can be considered, in which qi represents the present state and Qi
represents the next state of the flip-flop where i = 0 to n–1, and n is the number of
flip-flops used.

Design of BCD or Decade (MOD-10) Counter
To design a BCD or Decade (MOD-10) counter that has ten states, i.e., 0 to 9,
the number of flip-flops required is four. Let us assume that the MOD-10 counter
has ten states, viz. a, b, c, d, e, f, g, h, i and j.
Step 1. State diagram: Now the state diagram for the MOD-10 counter can be
drawn as shown in Figure 5.26. Here, it is assumed that the state transition from
one state to another takes place when the clock pulse is asserted. When the clock
is unasserted, the counter remains in the present state.

Fig. 5.26 State Diagram of MOD-10 Counter

Step 2. State table: From the above state diagram, one can draw the PS-NS
table as shown in Table 5.18.
Table 5.18 PS-NS Table for MOD-10 Counter

Present State Next State


(PS) (NS)
a b
b c
c d
d e
e f
f g
g h
h i
i j
j a
The above state table does not have any redundant state because no two
states are equivalent. So, there is no modification required in the above state table.
Step 3. State assignment: Let us assign four state variables to these states a, b,
c, d, e, f, g, h, i and j as follows:
a = 0000, b = 0001, c = 0010, d = 0011, e = 0100, f = 0101, g = 0110, h =
0111, i = 1000 and j = 1001.
Then, the above PS–NS table can be modified as shown in Table 5.19.

Table 5.19 PS–NS Table for MOD-10 Counter Sequential Circuits

Present State Next State


(PS) (NS)
q3 q2 q1 q0 Q3 Q2 Q1 Q0
0 0 0 0 0 0 0 1
0 0 0 1 0 0 1 0
0 0 1 0 0 0 1 1
0 0 1 1 0 1 0 0
0 1 0 0 0 1 0 1
0 1 0 1 0 1 1 0
0 1 1 0 0 1 1 1
0 1 1 1 1 0 0 0
1 0 0 0 1 0 0 1
1 0 0 1 0 0 0 0
……………...... ......……………
1 0 1 0 d d d d
1 0 1 1 d d d d
1 1 0 0 d d d d
1 1 0 1 d d d d
1 1 1 0 d d d d
1 1 1 1 d d d d

Step 4. Excitation table: The excitation table having entries for the flip-flop inputs
(J3K3, J2K2, J1K1 and J0K0) can be drawn, from the above PS–NS table using the
application table of the J-K flip-flop given earlier, as shown in Table 5.20.
Table 5.20 Excitation Table for MOD-10 Counter

PS            NS            Excitation Inputs
q3 q2 q1 q0   Q3 Q2 Q1 Q0   J3 K3  J2 K2  J1 K1  J0 K0
0 0 0 0 0 0 0 1 0 d 0 d 0 d 1 d
0 0 0 1 0 0 1 0 0 d 0 d 1 d d 1
0 0 1 0 0 0 1 1 0 d 0 d d 0 1 d
0 0 1 1 0 1 0 0 0 d 1 d d 1 d 1
0 1 0 0 0 1 0 1 0 d d 0 0 d 1 d
0 1 0 1 0 1 1 0 0 d d 0 1 d d 1
0 1 1 0 0 1 1 1 0 d d 0 d 0 1 d
0 1 1 1 1 0 0 0 1 d d 1 d 1 d 1
1 0 0 0 1 0 0 1 d 0 0 d 0 d 1 d
1 0 0 1 0 0 0 0 d 1 0 d 0 d d 1
…………… …………… …………… ……………
1 0 1 0 d d d d d d d d d d d d
1 0 1 1 d d d d d d d d d d d d
1 1 0 0 d d d d d d d d d d d d
1 1 0 1 d d d d d d d d d d d d
1 1 1 0 d d d d d d d d d d d d
1 1 1 1 d d d d d d d d d d d d

Step 5. Excitation maps: The excitation maps for the J3, K3, J2, K2, J1, K1, J0 and
K0 inputs of the counter can be drawn as shown in Figure 5.27 from the Excitation
Table 5.20.

Fig. 5.27 Excitation Maps for MOD-10 Counter

Step 6. Schematic diagram: Using the above excitation equations, the circuit

diagram for the MOD-10 counter can be drawn as shown in Figure 5.28.


Fig. 5.28 Circuit Diagram for MOD-10 Synchronous Counter
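Simplifying the excitation maps of Figure 5.27 gives the standard equations for the 8421 BCD counter: J0 = K0 = 1, J1 = q3'·q0, K1 = q0, J2 = K2 = q1·q0, J3 = q2·q1·q0 and K3 = q0 (stated here as the standard simplification, since the maps themselves survive only in the figure). The illustrative sketch below drives four J-K next-state functions with these equations and confirms the 0 to 9 counting sequence:

    def jk(q, j, k):                  # Qn+1 = J*Qn' + K'*Qn
        return int((j and not q) or (not k and q))

    q3 = q2 = q1 = q0 = 0
    for _ in range(11):               # eleventh line shows the recycle to 0000
        print(f"{q3}{q2}{q1}{q0}")
        j1 = (not q3) and q0
        j2 = k2 = q1 and q0
        j3 = q2 and q1 and q0
        q0, q1, q2, q3 = (jk(q0, 1, 1), jk(q1, j1, q0),
                          jk(q2, j2, k2), jk(q3, j3, q0))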

Check Your Progress


4. Define registers.
5. What are memory and shift registers?
6. What do you understand by MOD-number of a counter?

5.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The latch with the additional control input is called the flip-flop.
2. Flip-flops are of different types depending on how their inputs and clock
pulses cause transition between two states. There are four basic types,
namely, S-R, J-K, D and T flip-flops.
3. The T-type flip-flop is obtained from a J-K flip-flop by connecting its J and
K inputs together.
4. A register is a group of flip-flops used to store or manipulate data or both.
Each flip-flop is capable of storing one bit of information.
5. A register stores a sequence of 0s and 1s. Registers that are used to store
information are known as memory registers. If they are used to process
information, they are called shift registers.
6. The MOD-number (or the modulus) of a counter is the total number of
states which the counter goes through in each complete cycle.

5.6 SUMMARY

• The latch with the additional control input is called the flip-flop. The additional
control input is either the clock or enable input.

• Flip-flops are of different types depending on how their inputs and clock
pulses cause transition between two states. There are four basic types,
namely, S-R, J-K, D and T flip-flops.
• The D (delay) flip-flop has only one input called the Delay (D) input and
two outputs Q and Q’.
• A J-K flip-flop has a characteristic similar to that of an S-R flip-flop. In
addition, the indeterminate condition of the S-R flip-flop is permitted in it.
Inputs J and K behave like inputs S and R to set and reset the flip-flop,
respectively.
• The T or Trigger or Toggle flip-flop has only a single data (T) input, a clock
input and two outputs Q and Q’. The T-type flip-flop is obtained from a J-
K flip-flop by connecting its J and K inputs together.
• A Master–Slave flip-flop can be constructed using two J-K flip-flops. The
first flip-flop, called the Master, is driven by the positive edge of the clock
pulse; the second flip-flop, called the Slave, is driven by the negative edge
of the clock pulse.
• A register is a group of flip-flops used to store or manipulate data or both.
Each flip-flop is capable of storing one bit of information. An n-bit register
has n flip-flops and is capable of storing any binary information containing
n bits.
• Registers that are used to store information are known as memory registers.
If they are used to process information, they are called shift registers.
• A register which is capable of shifting data either left or right is called a
bidirectional shift register. A register that can shift in only one direction is
called a uni-directional shift register.
• The MOD-number (or the modulus) of a counter is the total number of
states which the counter goes through in each complete cycle.
• A synchronous, parallel, or clocked counter is one in which all stages are
triggered simultaneously.

5.7 KEY WORDS

• Flip-flop: It is the latch with the additional control input.


• Counter: It is a sequential circuit consisting of a set of flip-flops connected in
a specific manner to count the sequence of the input pulses presented to it
in digital form.
• Clocked flip-flop: It is a flip-flop that has a clock input.
• Delay (D) flip-flop: It is the flip-flop wherein the transfer of data from the
input to the output is delayed.

• Synchronous: It means that changes in the output occur at a specified point
on a triggering input.

5.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. Describe the operation of a D flip-flop.
2. What are the applications of T flip-flop?
3. List any three characteristics of flip-flop operations.
4. What is register/shift register?
5. Name the different types of registers.
Long Answer Questions
1. Explain with a logic diagram how a J–K master-slave FF is triggered.
2. Describe synchronous counter operations.
3. How does an S-R FF operate? Explain its action with a diagram.
4. Explain how an S-R FF can be converted into a D FF.
5. Explain how J-K FF can be converted into a D FF.
6. Write a procedure to construct any MOD-N counter.

5.9 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.


UNIT 6 DATA REPRESENTATION


Structure
6.0 Introduction
6.1 Objectives
6.2 Data Types
6.3 Fixed Point Representation
6.4 Floating Point Representation
6.5 Codes
6.5.1 Weighted Binary Codes
6.5.2 Non-weighted Binary Codes
6.6 Error Detection and Correction Codes
6.7 Answers to Check Your Progress Questions
6.8 Summary
6.9 Key Words
6.10 Self Assessment Questions and Exercises
6.11 Further Readings

6.0 INTRODUCTION

The computer stores all information in the form of binary numbers, i.e., all information
stored on a computer is written in the machine language that the computer understands.
This machine language uses binary numbers, which comprise only two symbols, 0
and 1. Thus, a bit (0 or 1) is the smallest unit of data in the binary system. You have
already studied the number system. In this unit, you will learn about the fixed and
floating point representation, various types of binary codes and how the error can be
detected and corrected after transmission through a channel.

6.1 OBJECTIVES

After going through this unit, you will be able to:


• Discuss the types of data
• Explain the fixed and floating point representation
• Understand the various types of codes
• Explain how to detect and correct errors in the transmitted data

6.2 DATA TYPES

Data is available in analog or in digital forms. Computers generate data in the


digital form, which can be easily stored in a digital format, but analog signals like
voice, video, etc., are difficult to store. They are sampled at regular intervals so
that they may be converted into a digital form. The digital form offers the following

advantages:
• Digital data is less affected by noise.
• Digital signals allow extra data to be carried over to provide a means for
detection and correction of errors.
• Processing of digital data is relatively easy. It can be performed in real-time
or non real-time.
• A single type of media can be used to store many different types of data like
video, speech, audio, etc. They may be stored on tape, hard-disk or CD-
ROM.
• A digital system provides more dependable response while an analog
system’s accuracy depends on parameters like component tolerance,
temperature, power supply variations, etc., and therefore, two analog
systems are never identical.
• Digital systems are considered more adaptable and can be reprogrammed
with software. Analog systems need different hardware for any functional
changes.
The disadvantages of digital conversion are:
• Data samples are quantised to given levels and introduce an error called
quantisation error. However, the quantisation error can be reduced by
increasing the number of bits used to represent each sample.
• The analog signal that is sampled at regular intervals to convert it into digital
signal will require large storage space. However, the data once stored tends
to be reliable and will not degrade over time.

6.3 FIXED POINT REPRESENTATION

Numbers in computers are typically represented using a fixed number of bits.


These sizes are typically 8 bits, 16 bits, 32 bits, 64 bits and 80 bits. These sizes
are generally a multiple of 8, as most computer memories are organized on an 8-
bit byte basis.
Numbers in which a specific number of bits are used to represent the
value are called fixed precision numbers. The number of bits used
determines the range of possible values that can be
represented. For example, there are 256 possible combinations of eight bits;
therefore, an 8-bit number can represent 256 distinct numeric values and the range
is typically considered to be 0–255. If a number is larger than 255, then it cannot
be represented using eight bits. Similarly, 16 bits allow a range of 0–65535.
When fixed precision numbers are used (as they are in virtually all computer
calculations), the concept of overflow must be considered.
Data Representation An overflow occurs when the result of a cal-culation cannot be represented
with the number of bits available. For example, when adding the two 8-bit quantities,
150 + 170, the result is 320. This is outside the range 0–255, and so the result
cannot be represented using eight bits. The result has overflowed the available
range. When overflow occurs, the low-order bits of the result will remain valid,
but the high-order bits will be lost. This results in a value that is significantly smaller
than the correct result.
When doing fixed precision arithmetic (which all computer arithmetic involves),
it is necessary to be conscious of the possibility of overflow in the calculations.
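The 150 + 170 example can be reproduced by masking the sum to eight bits (an illustrative check):

    a, b = 150, 170
    total = a + b              # true result: 320
    wrapped = total & 0xFF     # only the low-order 8 bits are kept
    print(total, wrapped)      # 320 64 -- the high-order bit (256) is lost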

6.4 FLOATING POINT REPRESENTATION

Floating-point number notation can be used conveniently to represent both large


and small fractional or mixed numbers. It makes the process of arithmetic operations
on these numbers relatively much easier. It greatly increases the range of numbers
from the smallest to the largest, that can be represented using a given number of
digits. Floating-point numbers are in general expressed in the form
N = m × bᵉ                                  (1)
where m is the fractional part called the significand or mantissa, e is the
integer part called the exponent, and b is the base of the number system or
numeration. The fractional part m is a p-digit number of the form (±d.dddd … dd),
with each digit d being an integer between 0 and b - 1 inclusive. If the leading digit
of m is non-zero, then the number is said to be normalized.
Decimal system:       N = m × 10ᵉ           (2)
Hexadecimal system:   N = m × 16ᵉ           (3)
Binary system:        N = m × 2ᵉ            (4)
Floating-point numbers consist of two parts:
Mantissa—the part of floating-point number that represents the magnitude of the
number.
Exponent—the part of a floating-point number that represents the number of places
that the decimal point (or binary) is to be moved.

S        Exponent (E)    Mantissa (fraction, F)
1 bit    8 bits          23 bits

Formula to be used to calculate the floating-point numbers is as follows:


Number = (-1)^S × (1 + F) × 2^(E-127)

For example, the decimal numbers 0.0003754 and 3754 are represented
in floating-point notation as 3.754 × 10⁻⁴ and 3.754 × 10³, respectively. Similarly,
the hex number 257.ABF will be represented as 2.57ABF × 16².
In case of normalized binary numbers, the leading digit, which is the MSB,
is always ‘1’ and thus does not need to be stored explicitly. While expressing a
given mixed binary number as a floating-point number, the radix point is shifted in
such a manner so as to have the MSB immediately to the right of the radix point as
a ‘1.’ Both the mantissa and the exponent can have a positive or a negative value.
The mixed binary number (110.1011)₂ = 0.1101011 × 2³
= 0.1101011e+0011.
Here, 0.1101011 is the mantissa and e+0011 implies that the exponent is +3.
For example, (0.000111)₂ will be written as 0.111e-0011, with 0.111 being
the mantissa and e-0011 implying an exponent of -3. Also, (-0.00000101)₂ may
be written as -0.101 × 2⁻⁵ = -0.101e-0101, where -0.101 is the mantissa and
e-0101 indicates an exponent of -5.
If we want to represent the mantissas using eight bits, then 0.1101011 and
0.111 would be represented as 0.11010110 and 0.11100000, respectively.
Range of Numbers and Precision
The range of numbers that can be represented in a machine depends upon the
number of bits in the exponent, whereas the fractional accuracy or precision is
ultimately determined by the number of bits in the mantissa. The higher the number
of bits in the exponent, the larger is the range of numbers that can be represented.
For example, the range of numbers possible in a floating-point binary number
format using six bits to represent the magnitude of the exponent would be from
2⁻⁶⁴ to 2⁺⁶⁴, which is equivalent to a range of 10⁻¹⁹ to 10⁺¹⁹.
The precision is determined by the number of bits used to represent the
mantissa. It is usually represented as decimal digits of precision. The concept of
precision as defined with respect to floating-point notation can be explained in
simple terms as follows. If the mantissa is stored in n number of bits, then it can
represent a decimal number between 0 and 2ⁿ - 1, as the mantissa is stored as an
unsigned integer. (Refer Table 6.1 and Figure 6.1.)
Table 6.1 Characteristic Parameters of IEEE Formats

Precision    Sign (bits)    Exponent (bits)    Mantissa (bits)    Total Length (bits)    Decimal Digits of Precision
Single       1              8                  23                 32                     >6
Double       1              11                 52                 64                     >15

Figure 6.1 Single-Precision Formats

Example 6.1: Describe step-by-step transformation of (23)10 into an equivalent


floating-point number in single-precision IEEE format.
Solution:
• (23)₁₀ = (10111)₂ = 1.0111e+0100
• The mantissa = 0111000 00000000 00000000
• The exponent = 00000100
• The biased exponent = 00000100 + 01111111 = 10000011
• The sign of the mantissa = 0
• (+23)₁₀ = 01000001 10111000 00000000 00000000
• Also, (-23)₁₀ = 11000001 10111000 00000000 00000000
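The bit pattern derived above can be cross-checked against the machine's own IEEE single-precision encoding using Python's standard struct module (illustrative):

    import struct

    for value in (23.0, -23.0):
        bits = struct.unpack(">I", struct.pack(">f", value))[0]
        print(f"{value:+.1f} -> {bits:032b}")
    # +23.0 -> 01000001101110000000000000000000
    # -23.0 -> 11000001101110000000000000000000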
Example 6.2: Write (-5) in all number representation formats.
Solution:
(-5) in unsigned form: not possible.
(-5) in sign-magnitude form: 0101 → 1101 (only the sign bit changes).
(-5) in 1's complement form: write the positive number 0101, then complement
each bit → 1010.
(-5) in 2's complement form:
    0101    (+5)
    1010    (-5 in 1's complement)
  +    1
    1011    (-5 in 2's complement)
In an unsigned number system with n bits, the range is 0 to 2ⁿ - 1. Refer Table 6.2.
For example, with 4 bits the range is 0000 to 1111, i.e., 0 to 2⁴ - 1 = 15.

Table 6.2 Equivalent Values of Binary Numbers in Sign Magnitude, 1's Complement,
and 2's Complement

Binary Sign Magnitude 1’s 2’s


0000 0 0 0
0001 1 1 1
0010 2 2 2
0011 3 3 3
0100 4 4 4
0101 5 5 5
0110 6 6 6
0111 7 7 7
1000 0 7 8
1001 1 6 7
1010 2 5 6
1011 3 4 5
1100 4 3 4
1101 5 2 3
1110 6 1 2
1111 7 0 1

Sign Magnitude Representation


1. It is used for both positive and negative numbers.
2. To represent a positive number, represent the number in binary with the MSB
zero.
3. To represent a negative number, write the magnitude in binary and then set
the MSB to 1, with the magnitude remaining the same.
1’s Complement
In this representation, a positive number is represented as in sign
magnitude representation. To represent a negative number:
1. Write positive number
2. Take 1’s complement
For example, +13 → 01101
To represent -13: 01101 (positive number, step 1) → 10010 (1's complement form)

Data Representation Example 6.3: Perform 5-4 using 1’s complement representation.
Solution:
5+[-4]
NOTES + 5  0101
4 0100
4
4 1011

5 0101
4 1011 In 1' s form
1000 Generate Carry
If carry is formed, add 1
0000
1
0001 1

2’s Complement
In this case, positive numbers are represented similar to 1’s complement or sign
magnitude representation. To represent a negative number, the steps are as follows:
1. Write positive number
2. Take 1’s complement
3. Add 1
For example,
+13 = 01101
To represent -13: 01101 → 10010 (1's complement) → add 1 → 10011 (-13 in 2's
complement)
Example 6.4: Perform (5-4) using 2’s complement representation.
Solution:
+5 → 0101
-4: +4 → 0100 → 1011 (1's complement) → 1100 (2's complement)
    0101    (+5 in 2's complement)
  + 1100    (-4 in 2's complement)
  ------
  1 0001    (a carry is generated)
The carry is discarded, leaving 0001, i.e., +1.
Range for four bits: -8 to +7.
For n bits: -2ⁿ⁻¹ to +(2ⁿ⁻¹ - 1); for n = 4, -2³ to 2³ - 1 = -8 to +7.
Suppose a number is represented in 2's complement form as 1011; its
equivalent decimal value is found as follows:
1011    (2's complement form)
1010    (subtract 1: 1's complement form)
0101    (complement the bits: binary magnitude = 5)
-5      (decimal value)
Another Method
Take the 1's complement of the pattern and add 1; this gives the magnitude:
1's complement of 1011 = 0100; 0100 + 1 = 0101 = 5, so the value is -5.
Similarly, if a number is represented in 2's complement form as 11011, its
equivalent decimal value is:
1's complement of 11011 = 00100; 00100 + 1 = 00101 = 5, so the value is -5.
Further, if a number is represented in 2's complement form as 111011, its
equivalent decimal value is:
1's complement of 111011 = 000100; 000100 + 1 = 000101 = 5, so the value is -5.
In 2’s complement representation to extent number of bit, copy MSB bit
line
1011 = -5
11011 = -5
111011 = -5
MSB is copied Self-Instructional
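These rules translate directly into code; Python's integers make the bit masking explicit. The helper names below are our own (illustrative sketch):

    def to_twos(value, bits):
        """Two's-complement bit pattern of value in the given width."""
        return value & ((1 << bits) - 1)

    def from_twos(pattern, bits):
        """Decode: subtract 2^bits when the sign (MSB) bit is set."""
        return pattern - (1 << bits) if pattern & (1 << (bits - 1)) else pattern

    print(f"{to_twos(-5, 4):04b}")   # 1011
    print(from_twos(0b1011, 4))      # -5
    print(f"{to_twos(-5, 6):06b}")   # 111011 -- sign extension copies the MSB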
Example 6.5: Perform (5 + 4) using 2's complement representation.
Solution:
+5 → 0101 (positive)
+4 → 0100 (positive)
sum → 1001 (reads as negative)
Overflow has occurred.
Overflow for Binary Numbers
When two positive numbers or two negative numbers are added and the result
exceeds the representable range, overflow occurs. In signed operation, overflow
may occur only when two numbers of the same sign are added. Let X and Y be the
sign bits of the two numbers and Z the sign bit of the result. The condition for
overflow is
Overflow = XYZ' + X'Y'Z
For example, adding -5 (1011) and -4 (1100) in four bits gives 0111 after the
carry out of the MSB is dropped; here X = Y = 1 and Z = 0, so XYZ' = 1 and
overflow is detected.
The second method to detect overflow is as follows:

Overflow = Cin ⊕ Cout    (0 → no overflow, 1 → overflow)
where Cin is the carry into the MSB and Cout is the carry out of the MSB.
+5 → 0101
+4 → 0100
sum → 1001 (Z = 1 while X = Y = 0, so overflow)
Extending each number by one bit (copying the MSB) gives the correct result:
+5 → 00101
+4 → 00100
sum → 01001 = +9
Note: If overflow occurs, then extend the number of bits by copying the MSB.
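Both detection rules are easy to state in code: compare the operand sign bits with the result sign bit (XYZ' + X'Y'Z), or equivalently XOR the carry into the MSB with the carry out of it. An illustrative sketch of the sign-bit rule:

    def add_overflows(a, b, bits=4):
        """Add two bit patterns in the given width; report signed overflow."""
        msb, mask = 1 << (bits - 1), (1 << bits) - 1
        z = (a + b) & mask
        x, y, s = bool(a & msb), bool(b & msb), bool(z & msb)
        return z, (x and y and not s) or (not x and not y and s)

    print(add_overflows(0b0101, 0b0100))   # (9, True): +5 + +4 overflows
    print(add_overflows(0b0101, 0b1100))   # (1, False): +5 + (-4) = +1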

Check Your Progress


1. What are the two data types?
2. What are fixed precision numbers?
3. Write the general form of floating point representation.
6.5 CODES

A code is a symbolic representation of information in a transformed form. Generally,
4-bit codes are used.
Binary Codes
The binary system of representation is most extensively used in digital systems
including computers. The octal and hexadecimal number systems are commonly
used for representing groups of binary digits. The binary coding system called the
straight binary code becomes cumbersome to handle when it is used to represent
larger decimal numbers.
To overcome this shortcoming and also to perform many other special
functions, several binary codes have been evolved. Some of the better known
binary codes are used efficiently to represent numeric and alphanumeric data to
perform special functions such as detection and correction of errors.
If X = the number of elements of information to be coded into binary-coded format
and J = the number of bits in the code, then the following expression must hold:
X ≤ 2^J
or J ≥ log2 X, i.e., J ≥ 3.32 log10 X
For example, coding the 26 letters of the alphabet requires J ≥ log2 26 ≈ 4.7, i.e., 5 bits.
6.5.1 Weighted Binary Codes
Straight binary numbers are somewhat difficult for people to understand. Starting
at the MSB, the places of the code are weighted in decreasing powers of 2. Such
a code is known as 8, 4, 2, 1 weighted codes. These codes are useful because of
the compact manner in which they can be described.
1001 = 1 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 9 (decimal)
In general,
N = Σ (i = 0 to n-1) a_i w_i + B
where,
a_i → code coefficient (the ith bit)
w_i → weight of the ith position
B   → positive or negative bias
n   → number of bits in the code
Example 6.6: Try to convert the binary number 10010110 to a decimal number.
Solution:
It turns out that 10010110 (binary) = 150 (decimal), but it takes quite a lot of time
and effort to make this conversion without a calculator.
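The weighted-sum formula above is straightforward to evaluate in code. A minimal Python sketch follows; the function name and the example weights are illustrative assumptions:

def weighted_value(bits, weights, bias=0):
    """Evaluate a weighted code: bits and weights are listed MSB first."""
    return sum(a * w for a, w in zip(bits, weights)) + bias

# Straight 8-4-2-1 binary: 10010110 -> 150
print(weighted_value([1, 0, 0, 1, 0, 1, 1, 0], [128, 64, 32, 16, 8, 4, 2, 1]))  # 150
# A 2421 weighted BCD digit: 1011 -> 2 + 0 + 2 + 1 = 5
print(weighted_value([1, 0, 1, 1], [2, 4, 2, 1]))  # 5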
Self-Complementary Codes: Certain codes have a distinct advantage in that
their logical complement is the same as their arithmetic complement. Examples
include the excess-3, 6311 and 2421 codes.
Reflective Codes: The 9’s complement of a reflected binary-coded decimal (BCD)
code word is achieved simply by changing only one of its bits. A reflected code is
characterized by the fact that it emerges from the central point with one bit changed.
Unit Distance Code (UDC): There is one bit change in the next or adjacent
code word. It is independent of the direction of the code. The UDCs have special
advantage in that they minimize transition error or flashing.
Binary Coded Decimal (BCD): It makes conversion to decimals much easier.
Table 6.3 shows the 4-bit BCD code for the decimal digits 0–9. It should be
noted that the BCD code is a weighted code, that is, its weights are 8-4-2-1. In
BCD code, each decimal digit is represented with four bits. Refer Table 6.3.
Table 6.3 Binary-Coded Decimal Codes
[Table not reproduced: it lists the 4-bit 8421 BCD code words 0000–1001 for the decimal digits 0–9.]
In BCD code, the six combinations 1010, 1011, 1100, 1101, 1110 and 1111 are
invalid digits, known as invalid BCD.
Following is an example of a valid BCD code:
(839)10  (1000 0011 1001)BCD
Note: If invalid state occurs in BCD, then we add 6, to get the correct result.
The MSB has a weight of 8 and the LSB has a weight of only 1. This code
is more precisely known as the 8421 BCD code.
The 8421 part of the name gives the weighting of each place in the 4-bit
code. There are several other BCD codes that have other weights for the four
place values. Because the 8421 BCD code is the most popular, it is customary to
refer to it simply as the BCD code. Refer Table 6.4.


Table 6.4 BCD Digits
Decimal          BCD Code
0 ................ 0000
1 ................ 0001
2 ................ 0010
3 ................ 0011
4 ................ 0100
5 ................ 0101
6 ................ 0110
7 ................ 0111
8 ................ 1000
9 ................ 1001
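The digit-by-digit nature of BCD makes conversion easy to sketch in code. A minimal Python example, with illustrative function names, reproducing the (839) conversion shown above:

def to_bcd(number):
    """Encode each decimal digit as its 4-bit 8421 BCD group."""
    return ' '.join(format(int(d), '04b') for d in str(number))

def from_bcd(groups):
    """Decode space-separated 4-bit BCD groups back to a decimal number."""
    digits = [int(g, 2) for g in groups.split()]
    if any(d > 9 for d in digits):
        raise ValueError('invalid BCD group (1010-1111 are not allowed)')
    return int(''.join(str(d) for d in digits))

print(to_bcd(839))                  # '1000 0011 1001'
print(from_bcd('1000 0011 1001'))   # 839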

6.5.2 Non-weighted Binary Codes

Some binary codes are non-weighted, that is, no fixed weight is attached to each
bit position. Two such non-weighted codes are the excess-3 code and the Gray
code.
Excess-3 Code: The excess-3 code is another important BCD code. It is
particularly significant for arithmetic operations as it overcomes the shortcomings
encountered while using the 8421 BCD code to add two decimal digits whose
sum exceeds 9. The excess-3 code has no such limitation, and it considerably
simplifies arithmetic operations. Table 6.5 shows the excess-3 code for the decimal
numbers 0–9.
Table 6.5 Excess-3 Code Equivalent of Decimal Numbers

Decimal Number Excess-3 Code


0 0011
1 0100
2 0101
3 0110
4 0111
5 1000
6 1001
7 1010
8 1011
9 1100

The excess-3 code for a given decimal number is determined by adding ‘3’
to each decimal digit in the given number and then replacing each digit of the newly
found decimal number by its 4-bit binary equivalent. If the addition of ‘3’ to a digit
produces a carry, as is the case with the digits 7, 8 and 9, that carry should not be
taken forward. The result of addition should be taken as a single entity and
subsequently replaced with its excess-3 code equivalent.
Excess-3 Code = BCD Code + 0011: It is an unweighted code. In this
code, the first three and the last three 4-bit combinations are invalid. Excess-3 is a
self-complementing code:
0 - (0011)  1’s complement  (1100) - 9
1 - (0100)  1’s complement  (1011) - 8
2 - (0101)  1’s complement  (1010) - 7
In the 2421, 3321, 4221, 4311 and 5211 codes, the weights in each code sum to
nine; hence these are self-complementing weighted codes.
Example 6.7: Let us find the excess-3 code for the decimal number 597.
Solution:
1. The addition of ‘3’ to each digit yields the three new digits/numbers ‘8,’
‘12’ and ’10.’
2. The corresponding 4-bit binary equivalents are 1000, 1100 and 1010,
respectively.
3. The excess-3 code for 597 is therefore given by: 1000 1100 1010 =
100011001010.
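The add-3-per-digit rule can be sketched as follows in Python (the function name is an illustrative assumption, not from the text):

def to_excess3(number):
    """Encode each decimal digit as the 4-bit binary of (digit + 3)."""
    return ''.join(format(int(d) + 3, '04b') for d in str(number))

print(to_excess3(597))   # '100011001010' (1000 1100 1010)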
Also, it is normal practice to represent a given decimal digit or number using
the maximum number of digits that the digital system is capable of handling. For
example, in 4-digit decimal arithmetic, 5 and 37 would be written as 0005 and
0037, respectively. The corresponding 8421 BCD equivalents would be
0000000000000101 and 0000000000110111 and the excess-3 code equivalents
would be 0011001100111000 and 0011001101101010.
Decimal equivalent of excess-3 code can be determined by first splitting the
number into four-bit groups, starting from the radix point, and then subtracting
0011 from each 4-bit group. The new number is the 8421 BCD equivalent of the
given excess-3 code, which can subsequently be converted into the equivalent
decimal number.
The complement of the excess-3 code of a given decimal number yields the
excess-3 code for 9’s complement of the decimal number. As adding 9’s
complement of a decimal number B to a decimal number A achieves A – B, the
excess-3 code can be used effectively for both addition and subtraction of decimal
numbers.
Excess-3 code is also known as self-complementing code. Each decimal
digit is coded into a 4-digit binary code. The code for each decimal digit is obtained
by adding decimal 3 to the natural BCD.
Notes:
1. It is an unweighted code.
2. It is a self-complementary code.
2421 Code: Here, for 2 we could also write (1000) instead of (0010), but it is not
used because the self-complementing property would then not be followed. Refer
Table 6.6.
Table 6.6 Decimal 2421 Code Conversion
[Table not reproduced: it lists the 2421 code words for the decimal digits 0–9.]
Gray Code: It is an unweighted binary code in which two successive values differ
by only one bit. The maximum error that can creep into a system using the binary
Gray code to encode data is much less than the worst-case error encountered with
straight binary encoding; it is therefore called a minimum-error code.
Table 6.7 Binary and Gray Code Equivalents of Decimal Numbers 0–15

Decimal Binary Gray


0 0000 0000
1 0001 0001
2 0010 0011
3 0011 0010
4 0100 0110
5 0101 0111
6 0110 0101
7 0111 0100
8 1000 1100
9 1001 1101
10 1010 1111
11 1011 1110
12 1100 1010
13 1101 1011
14 1110 1001
15 1111 1000

An examination of the 4-bit Gray code numbers as listed in Table 6.7 shows
that the last entry rolls over to the first entry. That is, the last and the first entry also
differ by only one bit. This is known as the cyclic property of the Gray code, that
is, cyclic permutation code.
A Gray code is a code assigned to each of a contiguous set of integers, or
to each member of a circular list—a word of symbols such that each two adjacent
code words differ by one symbol. These codes are also known as single-distance
codes reflecting the Hamming distance of 1 between adjacent codes. There can
be more than one Gray code for a given word length, but the term was first applied
to a particular binary code for the non-negative integers, the Binary-Reflected
Gray Code (BRGC), the 3-bit version of which is characterised as follows:
1. It is an unweighted code.
2. Successive numbers differ by only one bit.
3. It is also known as a UDC.
4. The Gray code is a cyclic code.
5. It is a minimum-error code.
Example 6.8: Convert binary number 1011 into Gray code.
Solution:
 MSB of Gray code = MSB of binary code
 From left to right, add each adjacent pair of binary code bits to get the next
Gray code bit, and discard carries.
B3 B2 B1 B0
1  0  1  1

G3 G2 G1 G0
1  1  1  0

1011 (Binary) → 1110 (Gray)
Example 6.9: Convert Gray number 1110 into binary code.
Solution:
 MSB of binary code = MSB of Gray code
 Add each binary code bit generated to the Gray code bit in the next adjacent
position, and discard carries.
G3 G2 G1 G0
1  1  1  0

B3 B2 B1 B0
1  0  1  1

1110 (Gray) → 1011 (Binary)
In the Gray code, a decimal number is represented in binary form in such a way
that each Gray-code number differs from the preceding and the succeeding
number by a single bit.
1. It has the reflection property.
2. It is an unweighted code.
3. It is a Unit Distance Code (UDC).
4. It is a more error-free code compared to others.
5. It is used for input/output devices.
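Because each Gray bit is the modulo-2 sum (XOR) of adjacent binary bits, the conversions of Examples 6.8 and 6.9 reduce to a few XOR operations. A minimal Python sketch with illustrative function names:

def binary_to_gray(n):
    """Gray code of n: each Gray bit is the XOR of adjacent binary bits."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the transform by XOR-ing progressively shifted copies."""
    n = g
    shift = 1
    while (g >> shift) > 0:
        n ^= g >> shift
        shift += 1
    return n

print(format(binary_to_gray(0b1011), '04b'))  # '1110'
print(format(gray_to_binary(0b1110), '04b'))  # '1011'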

Alphanumeric Codes
Alphanumeric codes are also called character codes. These are binary codes
which are used to represent alphanumeric data. The codes write alphanumeric
data including letters of the alphabet, numbers, mathematical symbols and
punctuation marks in a form that is understandable and processable by a computer.
These codes enable us to interface input–output devices such as keyboards, printers,
VDUs, etc., with the computer.
One of the better known alphanumeric codes during the early days of
evolution of computers when punched cards used to be the medium of inputting
and outputting data, is the 12-bit Hollerith code. The Hollerith code was used in
those days to encode alphanumeric data on punched cards.
Two widely used alphanumeric codes include the American Standard Code
for Information Interchange (ASCII) and the Extended Binary-Coded Decimal
Interchange Code (EBCDIC). While the former is popular with microcomputers
and is used nearly in all personal computers and workstations, the latter is mainly
used with larger systems.
American Standard Code for Information Interchange (ASCII Code)
American Standard Code for Information Interchange (ASCII), pronounced as
‘ask-ee’ is strictly a 7-bit code based on the English alphabet. ASCII codes are
used to represent alphanumeric data in computers, communications equipment
and other related devices. As it is a 7-bit code, it can at the most represent 128
characters.
It currently defines 95 printable characters including 26 uppercase letters
(A–Z), 26 lowercase letters (a–z), 10 numerals (0–9) and 33 special characters
including mathematical symbols, punctuation marks and space character. In addition,
it defines codes for 33 non-printing, mostly obsolete control characters that affect
how text is processed. The 8-bit version can represent a maximum of 256
characters.
When the ASCII code was introduced, numerous computers were dealing
with 8-bit groups (or bytes) as the smallest unit of information. The eighth bit was
commonly used as a parity bit for error detection on communication lines and
other device-specific functions. Machines that did not use the parity bit typically
set the eighth bit to ‘0.’
Some Important Facts about ASCII Code
1. The numeric digits, 0–9, are encoded in sequence starting at 30H.
2. The upper case alphabetic characters are sequential beginning at 41H.
3. The lower case alphabetic characters are sequential beginning at 61H.
4. The first 32 characters (codes 0-1FH) and 7FH are control characters.
They do not have a standard symbol (glyph) associated with them.
They are used for carriage control and protocol purposes. They include
0Dh (CR or carriage return), 0Ah (LF or line feed), 0Ch (FF or form
feed), 08h (BS or backspace).
5. Most keyboards generate the control characters by holding down a
control key (CTRL) and simultaneously pressing an alphabetic
character key.
Advantage of ASCII: The 8-bit International Organization for Standardization
(ISO) standard, ISO-8859, was developed as a true extension of ASCII, leaving the
original character mapping intact in the process of including additional values.
This made possible representation of a broader range of languages.
Disadvantage of ASCII: In spite of the standard suffering from incompatibilities
and limitations, ISO-8859-1, its variant Windows-1252 and the original 7-bit
ASCII continue to be the most common character encoding in use today.
Extended Binary-Coded Decimal Interchange Code (EBCDIC)
Extended Binary-Coded Decimal Interchange Code (EBCDIC) pronounced as
‘eb-si-dik’ is another widely used alphanumeric code mainly popular with larger
systems. The code was created by IBM to extend the binary-coded decimal that
existed during those days. All IBM mainframe computer peripherals and operating
systems use EBCDIC code, and their operating systems provide ASCII and
Unicode modes to allow translation between different encodings.
It may be mentioned here that EBCDIC offers no technical advantage over

the ASCII code and its variant ISO-8859 or Unicode. Its importance in the earlier
days lay in the fact that it made it relatively easier to enter data into larger machines
with punch cards. Since punch cards are not used on mainframes any more, the
code is used in contemporary mainframe machines solely for backwards
compatibility.
It is an 8-bit code and thus can accommodate up to 256 characters. A
single byte in EBCDIC is divided into two 4-bit groups called nibbles. The first 4-
bit group, called the ‘zone,’ represents the category of the character, while the
second group, called the ‘digit,’ identifies the specific character.
Unicode: Encodings such as ASCII, EBCDIC and their variants do not have a
sufficient number of characters to be able to encode alphanumeric data of all
forms, scripts and languages. As a result, these encodings do not permit multilingual
computer processing. In addition, these encodings suffer from incompatibility: two
different encodings may use the same number for two different characters, or
different numbers for the same characters. For example, code 4E (in hex) represents
the upper-case letter ‘N’ in ASCII code and the plus sign ‘+’ in the EBCDIC code.
It is the most complete character encoding scheme that allows text of all
forms and languages to be encoded for use by the computers. It not only enables
the users to handle practically any language and script but also supports a
comprehensive set of mathematical and technical symbols greatly simplifying any
scientific information exchange.
The Unicode standard has been adopted by industry leaders such as HP,
IBM, Microsoft, Apple, Oracle, Unisys, Sun, Sybase, SAP and many more.

6.6 ERROR DETECTION AND CORRECTION


CODES

In digital systems, the issue of error detection and correction is of great practical
significance. Errors creep into the bit stream owing to noise or other impairments
during the course of its transmission from the transmitter to the receiver. Any such
error, if not detected and subsequently corrected can be disastrous as digital systems
are sensitive to errors and tend to malfunction if the bit error rate is more than a
certain threshold level.
Error detection and correction involves the addition of extra bits known as
check bits to the information-carrying bit stream to give the resulting bit sequence
a unique characteristic that helps in detection and localization of errors. These
additional bits are also called redundant bits as they do not carry any information.
While the addition of redundant bits helps in achieving the goal of making the
transmission of information from one place to another error free or reliable, it also
makes the transmission less efficient.
When the digital information in the binary form is transmitted from one circuit
or system to another circuit, an error may occur. This means a signal corresponding
to ‘0’ may change to ‘1’ and vice versa.
Parity Code
A parity bit is an extra bit added to a string of data bits in order to detect any error
that might have crept into it while it was being stored or processed and moved
from one place to another in a digital system.
In an even parity, the added bit is such that the total number of 1s in the
data bit string becomes even. In odd parity, the added bit makes the total number
of 1s in the data bit string odd. This added bit could be a ‘0’ or a ‘1.’
The addition of a single parity cannot be used to detect two-bit errors,
which is a distinct possibility in data storage media such as magnetic tapes. The
single-bit parity code cannot be used to localize or identify the error bit even if one
bit is in error.
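Parity generation and checking can be sketched briefly. The following minimal Python example uses even parity; the function names are illustrative assumptions:

def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = bits.count('1') % 2
    return bits + str(parity)

def check_even_parity(word):
    """Return True if the received word still has an even number of 1s."""
    return word.count('1') % 2 == 0

word = add_even_parity('1011001')
print(word)                           # '10110010' (four 1s: already even)
print(check_even_parity(word))        # True
print(check_even_parity('10110011'))  # False: a single-bit error is detected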
Block Parity Codes
If there are n rows and m columns of message bits, an odd parity bit is added to
each row and an even parity bit is added to each column. A final check is carried out
at the intersection of the parity column and parity row. This shows the location of
the faulty bit pij, such as a bit in the 3rd column and 4th row.
Repetition Code
The repetition code makes use of repetitive transmission of each data bit in the bit
stream. In the case of threefold repetition, ‘1’ and ‘0’ would be transmitted as
‘111’ and ‘000,’ respectively.
If in the received data bit stream bits are examined in groups of three bits,
the occurrence of an error can be detected. In the case of single-bit errors, ‘1’
would be received as 011 or 101 or 110 instead of 111, and a ‘0’ would be
received as 100 or 010 or 001 instead of 000. In both cases, the code becomes
self-correcting if the bit in the majority is taken as the correct bit.
There are various forms in which the data are sent using the repetition code.
Usually, the data bit stream is broken into blocks of bits, and then each block of
data is sent some predetermined number of times. For example, if we want to
send 8-bit data given by 11011001, it may be broken into two blocks of four bits
each. In the case of threefold repetition, the transmitted data bit stream would be
110111011101100110011001. However, such a repetition code where the bit or
block of bits is repeated three times is not capable of correcting 2-bit errors, although
it can detect the occurrence of error.
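Majority-vote decoding of a threefold repetition code can be sketched as follows (a minimal Python example; the function names are illustrative):

def encode_repetition(bits, times=3):
    """Repeat every data bit `times` times."""
    return ''.join(b * times for b in bits)

def decode_repetition(stream, times=3):
    """Take each group of `times` bits and keep the majority value."""
    groups = [stream[i:i + times] for i in range(0, len(stream), times)]
    return ''.join('1' if g.count('1') > times // 2 else '0' for g in groups)

sent = encode_repetition('10')        # '111000'
received = '101000'                   # one bit flipped in transit
print(decode_repetition(received))    # '10': the single-bit error is corrected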


Cyclic Redundancy Check (CRC) Code
Cyclic Redundancy Check (CRC) codes provide a reasonably high level of
protection at a low redundancy level. The probability of error detection depends
upon the number of check bits, n, used to construct the cyclic code. It is 100% for
1-bit and 2-bit errors. It is also 100% when an odd number of bits are in error and
when the error bursts have a length less than n + 1. The probability of detection
reduces to 1 - (1/2)^(n-1) for an error burst length equal to n + 1, and to 1 - (1/2)^n
for an error burst length greater than n + 1.
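Although the text discusses only the detection probabilities, the CRC check bits themselves come from a modulo-2 long division. The following minimal Python sketch illustrates this; the generator '1011' (x^3 + x + 1) and the function name are illustrative assumptions, not values prescribed by the text:

def crc_remainder(data, generator):
    """Append zeros and divide modulo 2; the remainder gives the check bits."""
    n = len(generator) - 1                 # number of check bits
    dividend = list(data + '0' * n)
    for i in range(len(data)):
        if dividend[i] == '1':             # XOR the generator in at this position
            for j, g in enumerate(generator):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return ''.join(dividend[-n:])

data = '11010011'
check = crc_remainder(data, '1011')
print(check)                               # '011': the 3 CRC check bits
# Any multiple of the generator leaves remainder 0, so a valid codeword checks out:
print(crc_remainder(data + check, '1011') == '000')  # True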
Hamming Code
In the case of the error detection and correction codes, an increase in the number
of redundant bits added to the message bits can enhance the capability of the code
to detect and correct errors. If we have a sufficient number of redundant bits, and
if these bits can be arranged such that different error bits produce different error
results, then it should be possible not only to detect the error bit but also to identify
its location.
In fact, the addition of redundant bits alters the ‘distance’ code parameter,
which has come to be known as the Hamming distance. The Hamming distance is
nothing but the number of bit disagreements between two code words. For example,
the addition of single-bit parity results in a code with a Hamming distance of at
least 2. The smallest Hamming distance in the case of a threefold repetition code
would be 3. Hamming noticed that an increase in distance enhanced the code’s
ability to detect and correct errors.
Hamming’s code was therefore an attempt at increasing the Hamming distance
and at the same time having as high an information throughput rate as possible. The
algorithm for writing the generalized Hamming code is as follows:
1. The generalized form of code is P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8
D9 D10 D11 P5 …, where P and D, respectively, represent parity and
data bits.
2. We can see from the generalized form of the code that all bit positions
that are powers of 2 (positions 1, 2, 4, 8, 16) are used as parity bits.
3. All other bit positions (positions 3, 5, 6, 7, 9, 10, 11) are used to
encode data.
4. Each parity bit is allotted a group of bits from the data bits in the code
word, and the value of the parity bit (0 or 1) is used to give it certain
parity.
5. Groups are formed by first checking all N bits and then alternately
skipping and checking N bits following the parity bit. Here, N is the
position of the parity bit; 1 for P1, 2 for P2, 4 for P3, 8 for P4 and so on.
For example, for the generalized form of code given above, the various groups
of bits formed with different parity bits would be P1D1D2D4D5, P2D1D3D4D6D7,
P3D2D3D4D8D9, P4D5D6D7D8D9D10D11 and so on. To illustrate the formation of
groups further, let us examine the group corresponding to parity bit P3. Now, the
position of P3 is at number 4. In order to form the group, we check the first three
bits (N - 1 = 3) and then follow it up by alternately skipping and checking four bits
(N = 4). Refer Table 6.8.
Now, these points can be summarized as follows:
 The Hamming code is capable of correcting single-bit errors on messages
of any length.
 The Hamming code can detect 2-bit errors and it cannot give the error
locations.
 The number of parity bits required to be transmitted along with the
message, however, depends upon the message length, as shown above.
 The number of parity bits n required to encode m message bits is the
smallest integer that satisfies the condition (2^n – n) > m.
2^P ≥ D + P + 1
where P → number of parity bits
D → number of data bits
Table 6.8 Generation of Hamming Code
P1 P2 D1 P3 D2 D3 D4
Data bits (without parity) 0 1 1 0
Data bits with parity bit P1 1 0 1 0
Data bits with parity bit P2 1 0 1 0
Data bits with parity bit P3 0 1 1 0
Data bits with parity 1 1 0 0 1 1 0

The most commonly used Hamming code is the one that has a code word
length of seven bits with four message bits and three parity bits. It is also referred
to as the Hamming (7, 4) code. The code word sequence for this code is written
as P1P2D1P3D2D3D4, with P1, P2 and P3 being the parity bits and D1, D2, D3 and
D4 being the data bits.
The step-by-step process of writing the Hamming code for a certain group
of message bits and then the process of detection and identification of error bits is
given in the following example.
Example 6.10: Write the Hamming code for the 4-bit message 0110 representing
numeral ‘6.’
Solution:
The process of writing the code is illustrated in Table 6.8 with even parity. Thus,
the Hamming code for 0110 is 1100110.
Let us assume that the data bit D1 gets corrupted in the transmission channel.

The received code in that case is 1110110. In order to detect the error, the parity
is checked for the three parity relations mentioned above. During the parity check
operation at the receiving end, three additional bits X, Y and Z are generated by
checking the parity status of P1D1D2D4, P2D1D3D4 and P3D2D3D4, respectively.
These bits are a ‘0’ if the parity status is okay, and a ‘1’ if it is disturbed. In
that case, ZYX gives the position of the bit that needs correction. The process can
be best explained with the help of an example.
The examination of the first parity relation gives X = 1 as the even parity is
disturbed. The second parity relation yields Y = 1 as the even parity is disturbed
here too. The examination of the third relation gives Z = 0 as the even parity is
maintained. Thus, the bit that is in error is positioned at 011 which is the binary
equivalent of ‘3.’
This implies that the third bit from the MSB needs to be corrected. After
correcting the third bit, the received message becomes 1100110 which is the
correct code.
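The complete encode-and-correct cycle of Example 6.10 can be sketched compactly. The following minimal Python example follows the parity groups and the bit order P1 P2 D1 P3 D2 D3 D4 given in the text; the function names are illustrative assumptions:

def hamming74_encode(d1, d2, d3, d4):
    """Compute P1, P2, P3 by even parity over the groups given in the text."""
    p1 = d1 ^ d2 ^ d4          # P1 covers D1, D2, D4
    p2 = d1 ^ d3 ^ d4          # P2 covers D1, D3, D4
    p3 = d2 ^ d3 ^ d4          # P3 covers D2, D3, D4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(word):
    """Recompute the three checks; ZYX gives the 1-based error position."""
    p1, p2, d1, p3, d2, d3, d4 = word
    x = p1 ^ d1 ^ d2 ^ d4
    y = p2 ^ d1 ^ d3 ^ d4
    z = p3 ^ d2 ^ d3 ^ d4
    position = (z << 2) | (y << 1) | x
    if position:                        # non-zero: flip the offending bit
        word[position - 1] ^= 1
    return word

sent = hamming74_encode(0, 1, 1, 0)     # the message 0110 from Example 6.10
print(sent)                             # [1, 1, 0, 0, 1, 1, 0] -> 1100110
received = sent.copy()
received[2] ^= 1                        # corrupt D1 (the third bit)
print(hamming74_correct(received))      # restored to [1, 1, 0, 0, 1, 1, 0]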

Check Your Progress


4. What are the two types of binary codes?
5. What are character codes?
6. Define parity bit.

6.7 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. Data is available in analog or in digital forms.


2. The numbers in which a specific number of bits are used to represent the
value are called fixed precision numbers.
3. Floating-point numbers are in general expressed in the form
N = m × be
Where m is the fractional part called the significand (or mantissa), e is the
integer part called the exponent, and b is the base of the number system or
numeration.
4. Weighted and non-weighted are the two types of binary codes.
5. Alphanumeric codes are also called character codes. These are binary
codes which are used to represent alphanumeric data. The codes write
alphanumeric data including letters of the alphabet, numbers, mathematical
symbols and punctuation marks in a form that is understandable and
processable by a computer.
6. A parity bit is an extra bit added to a string of data bits in order to detect
any error that might have crept into it while it was being stored or processed
and moved from one place to another in a digital system.

6.8 SUMMARY

 Data is available in analog or in digital forms. Computer generates data in


the digital form which can be easily stored in a digital format but analog
signals like voice, video, etc., are difficult to store.
 Numbers in computers are typically represented using a fixed number of
bits. These sizes are typically 8 bits, 16 bits, 32 bits, 64 bits and 80 bits.
These sizes are generally a multiple of 8, as most computer memories are
organized on an 8-bit byte basis.
 An overflow occurs when the result of a calculation cannot be represented
with the number of bits available.
 Floating-point number notation can be used conveniently to represent both
large and small fractional or mixed numbers.
 Mantissa is the part of floating-point number that represents the magnitude
of the number.
 Exponent is the part of a floating-point number that represents the number
of places that the decimal point (or binary) is to be moved.
 Straight binary numbers are somewhat difficult for people to understand.
Starting at the MSB, the places of the code are weighted in decreasing
powers of 2. Such a code is known as 8, 4, 2, 1 weighted codes.
 The excess-3 code is another important BCD code. It is particularly
significant for arithmetic operations as it overcomes the shortcomings
encountered while using the 8421 BCD code to add two decimal digits
whose sum exceeds 9.
 Gray Code is an unweighted binary code in which two successive values
differ only by one bit.
 Error detection and correction involves the addition of extra bits known as
check bits to the information-carrying bit stream to give the resulting bit
sequence a unique characteristic that helps in detection and localization of
errors.
 A parity bit is an extra bit added to a string of data bits in order to detect
any error that might have crept into it while it was being stored or processed
and moved from one place to another in a digital system.
 The Hamming distance is nothing but the number of bit disagreements
between two code words. For example, the addition of single-bit parity
results in a code with a Hamming distance of at least 2.
6.9 KEY WORDS

 Binary-coded decimal: It is an encoding for decimal numbers in which


each digit is represented by its own binary sequence.
 Extended ASCII: It refers to a coding scheme that includes extra 128
codes in addition to the standard 128 ASCII codes.
 Underflow: It refers to the problem encountered in the case of floating
numbers when a negative exponent has a large value and cannot be adjusted
in the bits allotted to it.

6.10 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. What is an error-detecting code?
2. Where is the parity bit present in a code word?
3. What is a Hamming distance?
4. Mention the application of the Hamming code.
5. Write (+5) in all number representation format.
Long Answer Questions
1. Write -13 in 1’s complement form.
2. Perform 4-5 by 1’s complement representation.
3. What is a Hamming code? Explain the Hamming code method of detecting
and correcting errors in code words.
4. Explain in detail the method of detecting errors in an n-bit code.

6.11 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
BLOCK III
BASIC COMPUTER ORGANIZATION AND DESIGN

UNIT 7 INSTRUCTION CODES
Structure
7.0 Introduction
7.1 Objectives
7.2 Instruction Codes
7.2.1 Instruction Formats
7.2.2 Instruction Types
7.3 Computer Registers
7.4 Computer Instructions
7.4.1 Timing and Control
7.5 Answers to Check Your Progress Questions
7.6 Summary
7.7 Key Words
7.8 Self Assessment Questions and Exercises
7.9 Further Readings

7.0 INTRODUCTION

Computers have become inevitable in our lives today. It is essential to know their
usage for all aspects of life and work. Even though there might be certain differences
between one computer and another, the basic organization remains the same. The
hardware used and the codes used for inserting information into the computer may
differ superficially but they are similar in the actions they perform. In this unit, you
will learn about the instruction codes, computer registers, computer instructions.

7.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the instructions and instruction codes
 Learn about computer instructions
 Know about the timing and control unit
 Comprehend the instruction cycle

7.2 INSTRUCTION CODES

A group of bits forms an instruction code that commands the computer to carry
out an operation. The operation part is the most fundamental part of an instruction
code. It specifies the operation to be performed. An instruction code needs to

define the operation, the registers or the memory where the operands are located,
and registers or memory word where the result should be stored.
The operands may come from memory, from registers or from the instruction
itself.
Computer hardware understands the language of only 1s and 0s, so
instructions are encoded as binary numbers in a format called machine language.
7.2.1 Instruction Formats
The instructions come in only three formats: register (R), immediate (I) and jump
(J), as shown in Figure 7.1.
R-type: op (6 bits, 31–26) | rs (5 bits, 25–21) | rt (5 bits, 20–16) |
        rd (5 bits, 15–11) | sh (5 bits, 10–6) | fn (6 bits, 5–0)
        opcode, source register 1, source register 2, destination register,
        shift amount, opcode extension

I-type: op (6 bits, 31–26) | rs (5 bits, 25–21) | rt (5 bits, 20–16) |
        operand/offset (16 bits, 15–0)
        opcode, source or base register, destination register or data,
        immediate operand or address offset

J-type: op (6 bits, 31–26) | jump target address (26 bits, 25–0)
        opcode, memory word address (byte address divided by 4)

Fig. 7.1 Instruction Format

7.2.2 Instruction Types


An instruction format is basically a template which can be used to encode a class
of instructions. The basic computer has three instruction code formats. The formats
are as follows:
 R-type
 I-type
 J-type
In the format of Figure 7.1, each instruction has 32 bits. The operation code part
of the instruction contains 6 bits, and the interpretation of the remaining bits
depends on the format.
R-type instructions operate on three registers. I-type instructions operate
on two registers and a 16-bit immediate. J-type (jump) instructions operate on
one 26-bit immediate.
 R-type instructions: The name R-type is short for register-type. R-type
instructions use three registers as operands: two as sources and one as a
destination. All R-type instructions have an opcode of 0. The specific R-
type operation is determined by the funct field.
 I-type instructions: The name I-type is short for immediate-type. I-type
instructions use two register operands and one immediate operand. The
NOTES
32-bit instruction has four fields: op, rs, rt, and imm. The first three
fields, op, rs, and rt, are like those of R-type instructions.
 J-type instructions: The name J-type is short for jump-type. This format
is used only with jump instructions. This instruction format uses a single 26-
bit address operand.
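Extracting these fields is a matter of shifting and masking. The following minimal Python sketch decodes the R-type and J-type layouts of Figure 7.1; the function names are illustrative assumptions:

def decode_r_type(word):
    """Split a 32-bit word into op, rs, rt, rd, sh, fn fields."""
    return {
        'op': (word >> 26) & 0x3F,   # bits 31-26
        'rs': (word >> 21) & 0x1F,   # bits 25-21
        'rt': (word >> 16) & 0x1F,   # bits 20-16
        'rd': (word >> 11) & 0x1F,   # bits 15-11
        'sh': (word >> 6) & 0x1F,    # bits 10-6
        'fn': word & 0x3F,           # bits 5-0
    }

def decode_j_type(word):
    """Split a 32-bit word into op and the 26-bit jump target."""
    return {'op': (word >> 26) & 0x3F, 'target': word & 0x3FFFFFF}

print(decode_r_type(0b000000_00001_00010_00011_00000_100000))
# {'op': 0, 'rs': 1, 'rt': 2, 'rd': 3, 'sh': 0, 'fn': 32}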

7.3 COMPUTER REGISTERS


Instruction execution is the basic task carried out by the CPU. The execution of
each instruction is done using a number of small operations called microoperations.
Therefore, the basic issues concerning a CPU can be expressed as:
 The speed should be fast (as much as possible).
 The capacity of the main memory required by the CPU is very large.
For further understanding, let us begin with the definitions of relevant terms:
 Cycle time of the CPU: Time taken by the CPU to execute a well-defined
shortest microoperation.
 Memory cycle time: Speed at which the memory can be accessed by the
CPU.
It has been established that the memory cycle time is approximately 1–10 times
higher than the CPU cycle time. This is the reason why temporary storage is made
available within the CPU in the form of CPU registers. The CPU registers, referred
to as fast memory, can be accessed almost instantaneously.
Further, the number of bits a register can store at a time is called the length of
the register. Most CPUs sold today have 32-bit or 64-bit registers. The size of the
register is also called the word size and indicates the amount of data that a CPU
can process at a time. Thus, the bigger the word size, the faster the computer can
process data.
The number of registers varies among computers, but typical registers found in
most computers are:
 Memory Address Register (MAR): Specifies the address of memory
location from which data is to be accessed (in case of read operation) or to
which data is to be stored (in case of write operation).
 Memory Buffer Register (MBR): Receives data from the memory (in
case of read operation) or contains the data to be written in the memory (in
case of write operation).
 Program Counter (PC): Keeps track of the instruction that is to be
executed next, after the execution of the current instruction.
 Accumulator (AC): Interacts with the ALU and stores the input or output

operand. This register, therefore, holds the initial data to be operated upon,
the intermediate results and final results of processing operations.
 Instruction Register (IR): Instructions are loaded in the IR before their
execution, i.e., the instruction register holds the current instruction that is
being executed.
A two-step process can be used to define the simplest form of instruction
processing:
1. The CPU reads or fetches instructions or codes from the memory one
at a time.
2. It executes the operation specified by this instruction.
Instruction fetching is done using the program counter (PC), which keeps track
of the next instruction to be fetched. Normally the next instruction in the sequence
is fetched, since a program executes sequentially. The fetched instruction, in the
form of binary code, is loaded into an instruction register (IR) in the CPU. The
CPU then interprets the instruction and executes the required action. These actions
can be divided into the following categories:
 Data transfer: from CPU to memory or memory to CPU, or from CPU to
I/O or I/O to CPU.
 Data processing: an arithmetic or logic operation may be performed on the
data by the CPU.
 Sequence Control: This action is required to alter the sequence of
execution. For example, if an instruction from location 50 specifies that the
subsequent instruction to be fetched should be from location 100, and then
the program counter will need to be modified to contain the location 100
(which otherwise would have contained 51).
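The fetch-execute process and the three categories of actions can be illustrated with a toy simulator. In the following minimal Python sketch, the opcodes, memory contents and register names are illustrative assumptions, not a real instruction set:

memory = {50: ('JMP', 100), 100: ('ADD', 7), 101: ('HLT', None)}
pc, ac = 50, 0                     # program counter and accumulator

while True:
    ir = memory[pc]                # fetch: load the instruction into the IR
    pc += 1                        # PC now points at the next instruction
    op, operand = ir               # decode
    if op == 'ADD':                # data processing: AC = AC + operand
        ac += operand
    elif op == 'JMP':              # sequence control: overwrite the PC
        pc = operand
    elif op == 'HLT':
        break

print(ac)                          # 7

Note how the 'JMP' at location 50 overwrites the PC with 100, exactly as in the example above where the PC would otherwise have contained 51.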

7.4 COMPUTER INSTRUCTIONS

The primary function of the processing unit in the computer is to interpret the
instructions given in a program and carry out the instructions. Processors are
designed to interpret a specified number of instruction codes. Each instruction
code is a string of binary digits. All processors have input/output, arithmetic, logic,
branch instructions and instructions to manipulate characters. The number and
type of instructions differ from one processor to another. The list of specific
instructions supported by the central processing unit (CPU) is termed as its
instruction set. An instruction in the computer should specify the following:
 The task or operation to be carried out by the processor, termed as the
opcode.

 The address(es) in memory of the operand(s) on which the data processing
is to be performed.
 The address in the memory that may store the results of the data-processing
operation performed by the instruction.
NOTES
 The address in the memory for the next instruction, to be fetched and
executed. The next instruction which is executed is normally the next
instruction following the current instruction in the memory. Therefore, no
explicit reference to the next instruction is provided.
Instruction Representation
An instruction is divided into a number of fields and is represented as a sequence
of bits. Each of the fields constitutes an element of the instruction. A layout of an
instruction is termed as the instruction format (Figure 7.2).

Opcode Operand Address

4 bits 12 bits

Fig. 7.2 A Sample Instruction Format

In most instruction sets, many instruction formats are used (Table 7.1). An
instruction is first read into an instruction register (IR), then the CPU, which extracts
and processes the required operands on the basis of references made on the
instruction fields, and then decodes it. Since the binary representation of the
instruction is difficult to comprehend, it is seldom used for representation. Instead,
a symbolic representation is used.
Table 7.1 Examples of Typical Instructions

Instruction    Interpretation                                      Number of Addresses
ADD A,B,C      The operation A = B + C is executed                 3
ADD A,B        A = A + B; the original content of operand
               location A is lost                                  2
ADD A          AC = AC + A; here A is added to the accumulator     1
Typically, CPUs manufactured by different manufacturers have different


instruction sets. This is why machine-language programs developed for a particular
CPU do not run on a computer with a different CPU (having a different instruction
set).
7.4.1 Timing and Control
The timing and control unit generates timing and control signals (Figure 7.3). It is
necessary for the execution of instructions which provides status, timing and control
signals. This unit is necessary for the other parts of the CPU. It acts as the brain
of the computer which controls other peripherals and interfaces. It consists of the
program counter (PC) which is used for addressing the program. It contains the Instruction Codes

eight-level hardware stack for PC storage during subroutine calls and input/output
interrupt services.
[Figure: a radar transmit/receive chain (arbitrary waveform and signal generators,
RF power amplifier, TX/RX duplexer, antenna, receiver front end, IF processor)
coordinated by a control computer and the timing and control unit over a GPIB bus]

Fig. 7.3 Air Radar-Attached Timing and Control Unit

Figure 7.3 shows how the clock pulses and control signals are collectively
generated by both the units for the required operation of the radar system, radar
sample clock and the pulse repetition frequency.
The memory control unit (Figure 7.4) works as an interface between the
processor and all the on-chip or off-chip memories. Timing is based on the system
clock which is either an on-board oscillator or an external clock. In either case,
the maximum clock frequency is 50 MHz (megahertz) when using 32-bit TSR
(terminate and stay resident) and 44 MHz when using 64-bit TSR (Figure 7.4).
[Figure: the op-code in the instruction register feeds an instruction decoder (LDA,
STA, ADD, SUB, MBA, JMP, JN, HLT); together with a ring counter (T0–T5) and
the negative flag, a control matrix generates the control signals]

Fig. 7.4 Block Diagram of Control Unit


The following steps are performed by the control unit to fetch and execute
instructions:
 It reads the address of the memory location where the instruction lies.
 It reads the instruction from the memory.
 It sends instructions to the decoding circuit for decoding.
 It addresses the data which is required for executing and reading from
the memory.
 It then sends the result to the memory or keeps it in the same register until
its turn comes in the queue.
 It uses the program counter to fetch the next instruction.
The functions of the control unit are as follows:
 It controls the entire operation of the computer.
 It also controls all other devices connected to the CPU.
 It fetches instructions from the memory and then decodes the instruction.
After interpreting the instructions, it knows what tasks are to be
performed. The last step is to send suitable control signals to other
components.
 It executes all other necessary steps to run instructions successfully.
 It maintains the set of instructions and directs the operation of the entire
system.
 It controls the data flow between the CPU and the main memory.
 It fetches the instructions from the memory, one after another, for the
execution unit where all the instructions are run and executed.

Check Your Progress


1. What are three instruction code formats?
2. What is the function of the timing and control unit?

7.5 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The basic computer has three instruction code formats. The formats are as
follows:
 R-type
 I-type
 J-type

2. The timing and control unit generates timing and control signals and is

necessary for the other parts of the CPU. It acts as the brain of the computer
which controls other peripherals and interfaces.

7.6 SUMMARY

 A group of bits forms an instruction code that commands the computer to


carry out an operation. The operation part is the most fundamental part of
an instruction code. It specifies the operation to be performed.
 Instruction execution is the basic task carried out by the CPU. The execution
of each instruction is done using a number of small operations called
microoperations.
 Memory Address Register (MAR) specifies the address of memory location
from which data is to be accessed (in case of read operation) or to which
data is to be stored (in case of write operation).
 Memory Buffer Register (MBR) receives data from the memory (in case of
read operation) or contains the data to be written in the memory (in case of
write operation).
 The primary function of the processing unit in the computer is to interpret
the instructions given in a program and carry out the instructions. Processors
are designed to interpret a specified number of instruction codes. Each
instruction code is a string of binary digits.
 An instruction is divided into a number of fields and is represented as a
sequence of bits. Each of the fields constitutes an element of the instruction.
A layout of an instruction is termed as the instruction format.
 The timing and control unit generates timing and control signals. It is necessary
for the execution of instructions which provides status, timing and control
signals.

7.7 KEY WORDS

 Instruction code: It is a group of bits that commands the computer to


carry out an operation.
 Instruction format: It is basically a template which can be used to encode
a class of instructions.
 Instructions: These are commands that are given in a computer language.

7.8 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions


1. What are instruction codes?
2. What are the different types of computer registers?
Long Answer Questions
1. Discuss the various types of instruction code formats.
2. Explain the representation of computer instructions.
3. What are the steps performed by the control unit to fetch and execute
instructions?

7.9 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.


UNIT 8 INSTRUCTION CYCLE


Structure
8.0 Introduction
8.1 Objectives
8.2 Complete Computer Description
8.2.1 Basic Anatomy of a Computer
8.2.2 Basic Design and Components of a Computer
8.2.3 Data Representation within the Computer
8.3 Instruction Cycle
8.4 Memory Reference Instructions
8.4.1 Memory Reference Format
8.5 Input/Output and Interrupt
8.6 Answers to Check Your Progress Questions
8.7 Summary
8.8 Key Words
8.9 Self Assessment Questions and Exercises
8.10 Further Readings

8.0 INTRODUCTION

In this unit, you will learn about the design of basic computer, instruction cycle,
memory reference instructions and I/O interrupt. The instruction cycle (also known
as the fetch–decode–execute cycle or the fetch-execute cycle) is the basic
operational process of a computer system. It is the process by which a computer
retrieves a program instruction from its memory, determines what actions the
instruction describes, and then carries out those actions. This cycle is repeated
continuously by the central processing unit (CPU), from boot-up till the computer
has shut down.

8.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the basic design and components of a computer
 Discuss the memory reference instructions
 Discuss the significance of input/output and interrupt

8.2 COMPLETE COMPUTER DESCRIPTION

You know that computers can store huge amounts of data and are designed to
cater to the end user’s need for speed, accuracy, diligence, versatility and storage
capacity. Their characteristics are as follows:
 Speed: The internal processes of computers operate at electronic speeds.
This speed is checked only by the programs controlling these processes
and the amount of data being processed. A computer can perform in a
minute what a human being may require a lifetime to perform. The speed of
computers is not referred to in terms of seconds or milliseconds. It is referred
to in terms of microseconds (10^-6), nanoseconds (10^-9) and picoseconds
(10^-12).
 Accuracy: A computer is extremely accurate. Although there are chances
of errors, they occur mostly due to human error and not due to technological
drawbacks. Errors originate due to imprecise thinking by the programmer
or due to the input of erroneous data. They could also arise due to the poor
design of systems. Garbage in garbage out (GIGO) is the term used to
refer to computer errors resulting from incorrect data input or due to lack
of reliability of programs.
 Diligence: Unlike human beings, computers are capable of working for
long hours without breaks. A computer can perform a million calculations
with accuracy and speed. The speed or level of accuracy will be consistent
and will not deteriorate till the last calculation.
 Versatility: Computers can perform any task, as long as it can be broken
down to a series of logical steps. For example, a task such as preparing a
payroll can be reduced to a few logical tasks or operations performed in a
logical sequence. This breaking down of a process into steps facilitates
computerized processing.
A computer does have its limitations also. It can perform only four basic
operations:
(i) It can exchange information with the outside world via input/output
(I/O) devices.
(ii) It can transfer data internally within the CPU.
(iii) It can perform basic arithmetic operations.
(iv) It can perform comparisons.
 No intelligence: A computer does not possess any intelligence of its own.
It needs to be told what it has to do and in what sequence.
 Information explosion: The speed with which computers can process
information in huge volumes, has resulted in information explosion or
generation of information on a large scale. Human beings have the ability to
sift through data or knowledge and choose to retain only the important
information and forget the irrelevant or unimportant stuff. There is clearly a
difference in the way computers store information and the way human beings
do. The secondary storage capacity of computers assists in storing and
recalling any amount of information. Therefore, it becomes possible to
retain information for as long as desired and recall it whenever needed.
8.2.1 Basic Anatomy of a Computer Instruction Cycle

The size, shape, cost and performance of computers have changed over the years.
However, the basic logical structure remains the same (Figure 8.1). A computer
system has three essential parts: NOTES
 Input device
 CPU (consisting of the main memory, the arithmetic logic unit and the control
unit).
 Output device
In addition to these basic parts, computers also use secondary storage
devices (also called auxiliary storage or backing storage), used for storing data
and instructions on a long-term basis.

[Figure: the input unit feeds the central processing unit (control unit, ALU and
main memory), which feeds the output unit; secondary storage is attached to the CPU]


Fig. 8.1 Schematic Representation of a Computer System
Inputting, storing, processing, outputting and controlling are the basic
operations that help convert raw data into relevant information.
1. Input units
Inputting is the process of entering data and instructions into the computer system.
Both program and data need to be in the computer system before any kind of operation
can be performed. Program refers to the set of instructions which the computer has
to carry out, and data is the information on which these instructions are to operate.
For example, if the task is to rearrange a list of telephone subscribers in alphabetical
order, the sequence of instructions that guide the computer through this operation is
the program, whilst the list of names to be sorted is the data.
The input unit performs the process of transferring data and instructions
from the external environment into the computer system. Instructions and data
enter the input unit depending upon the particular input device used (keyboard,
scanner, card reader, etc). Regardless of the form in which the input unit receives
data, it converts the data and instructions into a form that is acceptable by the
computer (binary codes). It then supplies the converted data and instructions for
further processing to the computer system.

Main memory (primary storage)


Storing is the process of saving instructions and data so as to make them available
for future use, as and when required.
Instructions and data are stored in the primary storage before processing
and are transferred when there is need, to the arithmetic logic unit (ALU) where
the actual processing takes place. After completing the process, the final results
are again stored in the primary storage till they are released to an output device.
Also, any intermediate results generated by the ALU are temporarily transferred
back to the primary storage till there is a need for them. Thus, data and instructions
may move back and forth many times between the primary storage and the ALU
before the processing is completed. It may be worth remembering that no processing
is done in the primary storage.
Arithmetic logic unit
Processing refers to the performing of arithmetic or logical operations on data, to
convert them into useful information. Arithmetic operations include operations of
add, subtract, multiply, divide and logical operations are operations of comparison
like less than, equal to, greater than.
After the input unit transfers the information into the memory unit, the
information can then be further transferred to the ALU where comparisons or
calculations are done and results are sent back to the memory unit. Since all data
and instructions are represented in numeric form (bit patterns), ALUs are designed
to perform the following four basic arithmetic operations: multiply, divide, add,
subtract, and logical operations like less than, equal to, greater than.
Secondary storage
A computer has limited storage capacity. It becomes necessary to store large
volumes of data. Therefore, additional memory called secondary storage or auxiliary
memory is used in most computer systems.
Storage other than the primary storage is called secondary storage and it
enables permanent storage of programs and huge volumes of data belonging to
the users. Examples of such storage are hardware devices like magnetic tapes and
magnetic disks.
2. Output unit
Outputting is the process of providing the results to the user. These could be in
the form of visual display and /or printed reports.
Since computers work with binary codes, the results produced are also in
binary form. The basic function of the output unit, therefore, is to convert these
results into human-readable form before providing the output through various output

devices like terminals, printers, etc.


Control unit
Controlling refers to the process of directing the sequence and manner of
performance of all these operations. It is the function of the control unit to ensure
that according to the stored instructions, the right operation is done on the right
data at the right time. It is the control unit that obtains instructions from the program
stored in the main memory, interprets them and ensures that other units of the
system execute them in the desired order. In effect, the control unit is comparable
to the central nervous system in the human body.
3. Central processing unit
The control unit and arithmetic logic unit are together known as the central
processing unit. It is the brain of any computer system.
A CPU is the most important component of a digital computer that interprets
the instructions and processes the data contained in computer programs. The CPU
works as the brain of the computer and performs most of the calculations. It is also
referred to as the processor and is the most important component of a computer.
For large computers, a CPU may require one or more printed circuit boards (PCBs)
but in the case of PCs it comes in the form of a single chip called a microprocessor.
PCB is a board that contains the circuitry used to connect the components of a PC.
8.2.2 Basic Design and Components of a Computer
Personal computers are microcomputers that are commonly used for commercial
data processing, desktop publishing (DTP), engineering applications and so on.
Figure 8.2 shows a personal computer.

Fig. 8.2 Personal Computer

A personal computer comprises a hard disk drive (HDD), random access
memory (RAM), processor, a keyboard, a floppy disk drive (FDD), a mouse, a
CD drive, a colour monitor and read only memory (ROM). The RAM, ROM,
microprocessor and other circuits are connected on the motherboard, which is a
single board as shown in Figure 8.3.
Fig. 8.3 Motherboard and CPU (the motherboard carries the microprocessor/CPU, RAM,
ROM, the data bus, expansion slots, networking ports, monitor/keyboard/mouse ports, a
sound card, a video/graphics card, and connectors for the floppy disk drive and hard disk drive)


Processor
The microprocessor comprises the control unit, memory unit (registers) and arithmetic
logic unit (Figure 8.4). The processing speed of a computer depends on the
clock speed of the system and is measured in megahertz (MHz).
The Intel Corporation’s Pentium processors are used in most personal
computers. Motorola, Cyrix and AMD (Advanced Micro Devices) are other
makers of processors which are also used in personal computers.

Fig. 8.4 A Microprocessor

8.2.3 Data Representation within the Computer


Information is managed inside the computer by electrical components, such as
transistors, integrated circuits, semiconductors and wires. These components
indicate only two states or conditions. Transistors may be conducting or non-
conducting; magnetic materials are either magnetized or non-magnetized in a
direction and a pulse or voltage is either present or not present. All information can
therefore be represented within the computer by the presence (ON) or absence
(OFF) of these various signals. Thus, all data to be stored and processed in
computers is transformed or coded as strings of two symbols, one symbol to
represent each state. The two symbols normally used are 0 and 1. These are
known as bits, an abbreviation for binary digits. You will now learn about some
commonly used terms:
• Bit: A bit is the smallest element used by a computer. It holds one of
two possible values: 0 (off) or 1 (on).
• A bit which is OFF is also considered to be FALSE or NOT SET; a bit
which is ON is also considered to be TRUE or SET. A single bit can store
only two values, so two bits together can hold only four unique combinations,
namely,
00 01 10 11
Bits are, therefore, combined together into larger units to hold a greater
range of values.
• Nibble: A nibble is a group of four bits. This gives a maximum of
16 possible different values.
2^4 = 16 (2 to the power of the number of bits)
• Bytes: Bytes are a grouping of 8 bits (two nibbles) and are often used to
store characters. They can also be used to store numeric values.
2^8 = 256 (2 to the power of the number of bits)
• Word: Just like we express information in words, so do computers. A
computer 'word' is a group of bits, the length of which varies from machine
to machine but is normally pre-determined for each machine. The word
may be as long as 64 bits or as short as 8 bits.
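The rule '2 to the power of the number of bits' is easy to verify mechanically. The following minimal C sketch (an illustration added here, not part of the original text) prints the number of distinct values that a bit, a nibble, a byte and a 16-bit word can hold:

#include <stdio.h>

int main(void) {
    /* number of distinct values = 2 to the power of the number of bits */
    unsigned bits[]    = {1, 4, 8, 16};
    const char *name[] = {"bit", "nibble", "byte", "16-bit word"};
    for (int i = 0; i < 4; i++) {
        unsigned long values = 1UL << bits[i];
        printf("%-11s: %2u bits -> %lu distinct values\n", name[i], bits[i], values);
    }
    return 0;
}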

8.3 INSTRUCTION CYCLE

When a computer is given an instruction in the machine language, it is fetched from
the memory by the CPU to execute. The instruction cycle (or fetch-and-execute
cycle) refers to the time period during which one instruction is fetched and executed
by the CPU. An instruction cycle has the following four stages:
1. Fetch: In this step, an instruction is loaded from the memory into the CPU
registers. All the instructions must be fetched before they can be executed.
2. Decode: In this step, the control unit decodes the instructions.
3. Derive effective address of the instruction: If the instruction has an
indirect address, then the effective address of the instruction from memory
is read in this step.
4. Execute: In this step, the action represented by the instruction is performed.
Steps 1 and 2, taken together, are called the fetch cycle and they are the
same for each instruction. Steps 3 and 4 are called the execute cycle and they change
with each instruction.
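As a rough illustration of how these four stages cooperate, the following C sketch simulates the cycle for a toy machine. The 16-bit word layout (one indirect bit, a 3-bit opcode and a 12-bit address) and the two opcodes are assumptions made only for this example; they do not describe any real processor.

#include <stdio.h>

#define MEM_SIZE 4096
static unsigned short mem[MEM_SIZE];   /* main memory of the toy machine */

int main(void) {
    unsigned short pc = 0, ac = 0;
    /* tiny program: ADD direct from [5], ADD indirect via [6], then HALT */
    mem[0] = 0x1005;   /* opcode 1 = ADD, direct, address 5        */
    mem[1] = 0x9006;   /* opcode 1 = ADD, indirect bit set, addr 6 */
    mem[2] = 0x0000;   /* opcode 0 = HALT                          */
    mem[5] = 7;        /* the operand                              */
    mem[6] = 5;        /* a pointer to the operand                 */

    for (;;) {
        unsigned short ir = mem[pc++];        /* 1. fetch                      */
        unsigned op = (ir >> 12) & 0x7;       /* 2. decode the opcode          */
        unsigned ea = ir & 0x0FFF;            /* 3. derive effective address:  */
        if (ir & 0x8000) ea = mem[ea];        /*    one extra read if indirect */
        if (op == 0) break;                   /* 4. execute: HALT              */
        if (op == 1) ac += mem[ea];           /* 4. execute: ADD               */
    }
    printf("AC = %u\n", ac);                  /* prints AC = 14 (7 + 7)        */
    return 0;
}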
8.4 MEMORY REFERENCE INSTRUCTIONS

Memory reference instructions (MRI) are 32 bits long, with an extra 16 bits that
come from the next successive memory location following the instruction itself.
The effective memory address is computed by sign-extending the 16-bit
displacement to 32 bits and then adding it to the given index register as follows:
ea = r[x] + sxt(disp)
Here, 'ea' is the effective address. Indexing with r[0] refers to the program counter,
so the displacement gives an address relative to the instruction itself; this allows
easy reference to locations in the current program text. All
memory reference instructions share the assembly language formats as follows:
• op Rsrc, Rx, disp
• op Rsrc, label
The first format gives the op code together with Rx, which is one of R1 through R15;
the second is used for system addressing. The assembler automatically computes
disp, which is the difference between the current location and the addressed label.
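The computation ea = r[x] + sxt(disp) can be sketched in a few lines of C. This is only an illustration of the sign-extension step; the register-file size and the sample values are assumptions, not data from the text.

#include <stdint.h>
#include <stdio.h>

/* sxt: sign-extend a 16-bit displacement to 32 bits */
static int32_t sxt(uint16_t disp) {
    return (int32_t)(int16_t)disp;
}

int main(void) {
    uint32_t r[16] = {0};
    r[3] = 0x2000;                             /* assumed base in index register r[3] */
    uint16_t disp = 0xFFF8;                    /* -8 in 16-bit two's complement       */
    uint32_t ea = r[3] + (uint32_t)sxt(disp);  /* ea = r[x] + sxt(disp)               */
    printf("ea = 0x%04X\n", (unsigned)ea);     /* prints ea = 0x1FF8                  */
    return 0;
}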
Memory reference instructions are those instructions which require two machine
cycles: one cycle fetches the instruction and the other fetches the data and executes
the instruction. The instructions are based on arithmetic calculations.
Memory reference instructions are also used in multi-threaded parallel processor
architectures, where, during the instruction fetch, two consecutive instructions
are tested to determine whether both are register-load instructions or register-save
instructions. If both are register-save or register-load instructions, then the
corresponding addresses are tested.
8.4.1 Memory Reference Format
Memory reference instructions are arranged as per the protocols of the memory
reference format of the input file: a simple ASCII sequence of integers in the range
0 to 99, separated by spaces, without formatted text or symbols. These are pure
sequences of space-separated integer numbers, for example, |7 4|.
Figure 8.5 shows how 7 4 15 12 … are arranged in the memory
reference format. Here, dst and disp are keywords, where dst represents the
destination address and disp refers to the displacement.
_______________ _______________
|_|_|_|_|_|_|_|_| |_|_|_|_|_|_|_|_|
|7 4|3 0| |15 12|11 8|
|1 1 1 1| dst | |0 | x |
_______________________________
|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|
|15 0|
| disp |

Fig. 8.5 Memory Reference Format


Figure 8.6 shows the memory reference instructions.

Fig. 8.6 Memory Reference Instructions (three formats: Register-to-Register: op, dR, sR;
Memory Reference: op, dR, sB, Address; Indexed: op, dR, sX, sB, Address)

The dR and sR fields give the destination register and source register for
an operation and contain any value between 0 and 7. The sB field indicates the
base/address register and contains a value from 1 to 7. The sX field indicates the
arithmetic/index register and contains a value from 1 to 7. The first two bits of the
seven-bit op code are 00, 01 or 10; instructions starting with 11 form a separate
group. The op code has two parts: the first part indicates the type of number and
the second part shows the operation performed according to the instruction.
Table 8.1 (a) shows the first part of the op code and Table 8.1 (b) shows the second part.
Table 8.1 (a) First Part of Op Code

Binary Representation Type of Number Bit Representation


000 Byte 8-bit integer
001 Halfword 16-bit integer
010 Integer 32-bit integer
011 Long 64-bit integer
1000 Medium 48-bit floating point
1001 Floating 32-bit floating point
1010 Double 64-bit floating point
1011 Quad 128-bit floating point

Table 8.1 (b) Second Part of Op Code

Binary Representation Operation
0000 000 Swap
0001 001 Compare
0010 010 Load
0011 011 Store
0100 100 Add
0101 101 Subtract
0110 110 Multiply
0111 111 Divide
1000 Insert
1001 Unsigned Compare
1010 Unsigned Load
1011 XOR
1100 AND
1101 OR
1110 Multiply Extensively
1111 Divide Extensively
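To make the field layout concrete, the following C sketch packs and unpacks the op, dR, sX, sB and Address fields of the indexed format in Figure 8.6. The exact bit positions (a 7-bit op code in the top bits, followed by three 3-bit register fields and a 16-bit address) are an assumption adopted only for this illustration.

#include <stdio.h>

int main(void) {
    unsigned word = 0;          /* assumed 32-bit instruction word */
    word |= 0x24u << 25;        /* op: 7-bit op code               */
    word |= 5u    << 22;        /* dR: destination register (0-7)  */
    word |= 2u    << 19;        /* sX: arithmetic/index reg (1-7)  */
    word |= 3u    << 16;        /* sB: base/address reg (1-7)      */
    word |= 0x1234u;            /* 16-bit address field            */

    /* decoding simply masks the same fields back out */
    printf("op=%#x dR=%u sX=%u sB=%u addr=%#x\n",
           (word >> 25) & 0x7Fu, (word >> 22) & 7u,
           (word >> 19) & 7u, (word >> 16) & 7u, word & 0xFFFFu);
    return 0;
}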
In Figure 8.7, Memory Reference shows the memory location; Memory-to-Memory
shows source and destination operands; Long Scratchpad is the scalar instruction
format with sixty-four supplementary registers for source and one general register
for destination; whereas Aux Register Memory Reference denotes scalar instructions
that use thirty-two base registers. XOR, AND and OR perform the basic logical
operations.
Fig. 8.7 Memory Reference Instructions from Register-to-Register Format (field layouts for
the Memory Reference, Memory-to-Memory, Long Scratchpad, Register-to-Scratchpad,
Scratchpad-to-Scratchpad and Aux Register Memory Reference formats)

8.5 INPUT/OUTPUT AND INTERRUPT

Input/output interrupt is an external hardware event which causes the CPU to
interrupt the current instruction sequence. It follows an interrupt mechanism to call
the special interrupt service routine (ISR). Input/output interrupt services save all
the registers and flags; they later restore the registers and flags and then resume
the execution of the code they interrupted. An interrupt is essentially a procedure
call that can pause the execution of a program at any point between two
instructions. If an interrupt occurs in the middle of the execution of some instruction,
the CPU completes that instruction before transferring control to the interrupt
service routine.
For example, an interrupt may occur between the execution of two instructions, as
in the following fragment:
add (a,b);
← (interrupt occurs here)
mov (b,p);
Once the interrupt occurs, control transfers to the appropriate ISR that handles the
hardware event. When the ISR task is completed, the interrupt return (IRET)
instruction is executed and control returns to the point of interruption, resuming
execution of the original code with the MOV instruction.
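The save-handle-restore-return sequence can be mimicked in plain C. The sketch below is purely a simulation: a flag stands in for the hardware interrupt line and a struct stands in for the registers and flags that a real ISR saves and restores; no actual interrupt hardware or OS API is involved.

#include <stdio.h>

typedef struct { int a, b, flags; } Context;   /* stand-in for CPU registers/flags */

static volatile int interrupt_pending = 0;     /* stand-in for the interrupt line  */

static void isr(Context *ctx) {
    Context saved = *ctx;       /* save registers and flags                  */
    ctx->flags = 1;             /* the ISR is free to use the registers ...  */
    printf("ISR: device event handled\n");
    *ctx = saved;               /* ... but restores them before IRET         */
}

int main(void) {
    Context ctx = {1, 2, 0};
    ctx.a = ctx.a + ctx.b;      /* add(a,b) completes first                   */
    interrupt_pending = 1;      /* interrupt arrives between instructions     */
    if (interrupt_pending) {
        isr(&ctx);              /* control transfers to the ISR               */
        interrupt_pending = 0;  /* IRET: resume at the point of interruption  */
    }
    ctx.b = ctx.a;              /* execution continues with mov(b,p)-style op */
    printf("a=%d b=%d\n", ctx.a, ctx.b);   /* prints a=3 b=3 */
    return 0;
}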

A device can be used for identification of an input/output interrupt when it
registers an input/output interrupt associated with a particular input/output channel.
The benefits of input/output interrupts are as follows:
• It is an external analogy to exceptions.
• It allows response to unusual external events without an inline overhead
(polling).
• The processor initiates and performs all I/O operations.
• An interrupt can be produced by a device to the processor when it is
ready.
• The data is transferred into the memory through the interrupt handler.
• The control returns to the program which is currently in use.
An interrupt may be caused by any of the following:
• Any single device
• A device whose ID number is stored on the address bus
• Processor polling of devices
The source of the interrupt is checked and determined by the interrupt handler by
verifying the associated hardware status registers. Interrupts are processed in the
following ways:
• Lower numbers get higher priority.
• Interrupt latency becomes critical for some of the devices.
• Scheduling or ordering has a great impact on interrupt latency.
• A non-preemptive priority system gets affected by an interrupt, causing
delay in packet transmission.
• The interrupt in any device or system cannot be re-interrupted.
• All the pending interrupts are sequentially processed in order of priority.
The following are the functioning characteristics of input/output interrupts:
• The processor organizes all the input/output operations for smooth
functioning.
• The device may take more than the normal time to perform input/output
operations.
• After completing the input/output operation, the device interrupts the
processor.
• The processor then responds to the interrupt and transfers the data to the
destination.
• The input/output operation is thus successfully completed.

Input/output interrupt hardware settings can be used to configure the Input/Output
Ports and Interrupt Request values for a device. When the resource values estimated
by Windows are not correct, the device does not function; once these values are
changed, it works properly.

Check Your Progress


1. What is a nibble?
2. What are the functions of the two machine cycles necessary for memory
reference instructions?
3. List two benefits of input/output interrupts.
8.6 ANSWERS TO CHECK YOUR PROGRESS
QUESTIONS

1. A nibble is a group of four bits.
2. Memory reference instructions require two machine cycles: one fetches
the instruction and the other fetches the data and executes the
instruction.
3. Two benefits of input/output interrupts are as follows:
• It is an external analogy to exceptions.
• It allows response to unusual external events without an inline overhead
(polling).

8.7 SUMMARY

• The input unit performs the process of transferring data and instructions
from the external environment into the computer system.
• Processing refers to the performing of arithmetic or logical operations on
data to convert them into useful information. Arithmetic operations include
add, subtract, multiply and divide, while logical operations are comparisons
such as less than, equal to and greater than.
• The control unit and arithmetic logic unit are together known as the central
processing unit.
• Memory reference instructions are those instructions which require two machine
cycles: one fetches the instruction and the other fetches the
data and executes the instruction.
• Input/output interrupt is an external hardware event which causes the CPU
to interrupt the current instruction sequence. It follows an interrupt
mechanism to call the special interrupt service routine (ISR). Input/output
interrupt services save all the registers and flags; they later restore them
and then resume the execution of the code they interrupted.

8.8 KEY WORDS

• Nibble: A nibble is a group of four bits.
• Inputting: It is the process of entering data and instructions into the computer
system.
• Outputting: It is the process of providing the results to the user. These
could be in the form of visual display and/or printed reports.

8.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions


1. What are memory reference instructions?
2. List the causes of input/output interruptions.
3. Write a short note on the functions of output/input interrupts.
Long Answer Questions
1. Discuss the characteristics of a computer system.
2. What are the various stages of an instruction cycle?
3. What is an I/O interrupt? Explain.

8.10 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.

BLOCK IV
CENTRAL PROCESSING UNIT

UNIT 9 INTRODUCTION TO CPU
Structure
9.0 Introduction
9.1 Objectives
9.2 Organization of CPU Control Registers
9.2.1 Organization of Registers in Different Computers
9.2.2 Issues Related to Register Sets
9.3 Stack Organization
9.4 Answers to Check Your Progress Questions
9.5 Summary
9.6 Key Words
9.7 Self Assessment Questions and Exercises
9.8 Further Readings

9.0 INTRODUCTION

In this unit, you will learn about the organization of CPU control registers and stack
organization. A processor contains different types of registers that are used for
holding information related to the execution of instructions. For example, the program
counter holds the address of the next instruction to be fetched. A stack is a storage
device that stores information in a last-in-first-out (LIFO) fashion. Only two types
of operations are possible in a stack, namely push and pop. Push
places data onto the top of the stack, while pop removes the topmost element from
the stack. These operations can be used explicitly during the execution of a program.

9.1 OBJECTIVES

After going through this unit, you will be able to:


• Understand the organization of CPU control registers
• Discuss the issues related to register sets
• Explain the stack organization in a computer

9.2 ORGANIZATION OF CPU CONTROL
REGISTERS

The main components of the Central Processing Unit (CPU) are as follows:
• Control unit (CU): The basic role of the CU is to decode/execute instructions.
It generates the control/timing signals that trigger the arithmetic operations
in the ALU and also controls their execution.
• Arithmetic and logic unit (ALU): It is used for executing mathematical
operations, such as *, /, + and –; logical operations, such as AND and
OR; and shift operations, such as rotation of data held in data registers.
• Clock: There is a simple clock, a pulse generator, that helps to synchronize
the CU operations so that the instructions are executed in proper time. A
processor's speed is measured in hertz, which is the speed of the computer's
internal clock. The higher the hertz number, the faster is the processor.
• Registers: A CPU consists of several operational registers used for storing
data that are required for the execution of instructions.
The design of CPU in its modern form was first proposed by John von
Neumann and his colleagues for the Institute for Advanced Studies (IAS) computer.
The IAS computer had a minimal number of registers along with the essential
circuits. This computer had a small set of instructions with each instruction having
two parts: opcode and operand. It was allowed to contain only one operand
address.
The simplest machine has one general-purpose register, called accumulator
(AC), which is used for storing the input or output operand for ALU. ALU directly
communicates with AC.

Fig. 9.1 General Organization of a Computer (the PC, IR, MAR, AC and the X and Y input
registers of the ALU are connected through the control unit to the address, data and
control buses)
A computer contains the following parts (Figure 9.1):
• Program counter (PC): PC contains the address of an instruction to be
fetched. It has 12 bits as it holds a memory address (i.e., the address
of the next instruction). Programs are usually sequential in nature. The
program counter is updated by the CPU after each instruction is fetched,
pointing to the next instruction to be executed. But a branch or skip instruction
will modify the contents of the PC to some other value.
• Instruction register (IR): The instruction fetched from memory is stored
in IR, where the opcode and operand are analysed (the operand can be the data
itself or the address of the memory location which stores the data), and
accordingly, control signals are generated by the control unit for the execution
of the instruction.
• Temporary register (TR): TR is used for storing temporary data that
is generated during processing.
• Accumulator (AC): It is a general-purpose register which interacts with
the ALU and stores the results obtained from the ALU. These results are transferred
to the input or output registers.
• Data register (DR): It acts as buffer storage between the main memory
and the CPU. It also stores an operand for instructions such as ADD DR,
i.e., AC ← AC + DR. In other words, the contents of AC and DR are added
by the ALU and the result is stored in the accumulator. Thus, the data register
can also store one of the input operands.
• Memory address register (MAR): It is used to provide the address of the
memory location from where data is to be retrieved or to which data is to
be stored. MAR has 12 bits as it stores the memory address, which is 12
bits in size.
AR and DR play an important role in the transfer of data between the CPU
and the memory, i.e., they act as a buffer when the processor wishes to
copy information from a register to primary storage (or read information
from primary storage to a register). In computer systems that use a
common bus system, AR is directly connected to the address bus, while DR is
connected to the data bus. DR is used for interchanging the data among several
other registers.
• Input register (INPR): This register is used for storing input received
from an input device.
• Output register (OUTR): This register is used for storing output to be
transferred to an output device.
The input register and output register only need to be 8 bits since they store
8-bit characters.

9.2.1 Organization of Registers in Different Computers
How the various components of control registers are connected to one another
and how they communicate data among themselves is shown in Figure 9.2. From
a user's point of view, the register set can be classified under the following two
basic categories: programmer-visible registers and status and control registers. A
brief description of the two categories is given in the following lines.

Fig. 9.2 Register-Level CPU Organization (the program-control unit holds the program
counter, stack pointer, address register and instruction register; the data processing
(execution) unit holds the general-purpose registers, arithmetic-logic unit, data register
and status register; address generation logic and control circuits connect both units to
main memory and I/O devices over the system bus)

Programmer-visible registers
These registers can be used by machine or assembly language programmers to
hold all temporary data to minimize references to main memory. Virtually all
CPU designs provide for a number of user-visible registers, unlike the single
accumulator proposed for the IAS computer.
Programmer-visible registers can be accessed using machine language. The following
are the various types of programmer-visible registers.
(i) General-purpose register: The general-purpose registers are used for
various functions as required by the programmer. A true general-purpose
register can contain operand for any opcode address or can be used for the
calculation of address operand for any operation code of an instruction.
But today's trend favours machines having dedicated registers. For example,

some registers may be dedicated to floating point operations. In some cases,


general-purpose registers can be used for addressing functions (e.g., register
indirect, displacement, etc.). In other cases, there is a partial or clear
separation between data register and address register.
(ii) Data register: The data registers are used for storing intermediate results
or data. They cannot be used for the calculation of operand address.
(iii) Address register: An address register may be a general-purpose register,
but in some computers, the dedicated address registers are also used.
Examples of the dedicated address registers are as follows:
• Segment pointer: In a machine with segmented addressing, a segment
register holds the address of the base of the segment in the memory.
There may be multiple registers, e.g., one for the operating system
and one for the current process, and they may be auto-indexed.
• Index registers: These are used for the index addressing scheme and
may be auto-indexed.
• Stack pointer: When programmer-visible stack addressing is used,
the stack is typically in the memory, and a dedicated register, called the
stack pointer, points to the top of the stack. This allows
implicit addressing, that is, push, pop and other stack instructions need
not contain an explicit stack operand.
One of the key operations where the programmer-usable register is used is
when a subroutine call is issued. On a subroutine call, all temporary data stored in
these registers are stored back in main memory by the call statement and are
restored on encountering a return statement from the subroutine. This operation is
automatic in most machines. Yet, in certain machines, this is done by the
programmers. Similarly, while writing an interrupt service routine, it is required to
save some or all programmer-usable registers. In this simple project, the use of a
stack pointer could be too excessive and complex to be realized. Hence, a stack
pointer is exclusively used for executing the subprograms.
Status and control registers
These registers cannot be used by the programmers. However, they are used by
the control unit to control the operation of the CPU and by the operating system
programs to control the execution of programs. The control registers hold information
used for the control of the various operations. These registers cannot be used in
data manipulations. However, the contents of some of these registers can be used
by the programmer. Most of them are not visible to the user. Only a few of them
may be visible, which are executed in a control or operating system mode. The
various control and status registers that are essential for the execution of instructions
are as follows:

(i) Program counter (PC): PC is a register that holds the address of the next
instruction to be read from memory. The PC increments after each instruction
is executed and causes the computer to read the next instruction of program
which is stored sequentially in the main memory. In case of a branch
instruction, the address part is transferred to PC to become the address of
the next instruction. To read an instruction, the content of PC is taken as the
address for memory and a memory read cycle is initiated. PC is then
incremented by one. So, it holds the address of the next instruction in
sequence. Number of bits in the PC is equivalent to the width of a memory
address.
(ii) Instruction register (IR): IR is used to hold the opcode of instruction that
is most recently fetched from memory.
(iii) Status or flag register: Almost all CPUs, as discussed earlier, have a
status register (also called flag register or processor status word), a part
of which may be programmer-visible. A register formed by
condition codes is called a condition code register. It stores information
obtained from the execution of previous conditional instructions; depending
on the test result stored here, a conditional branch instruction can alter the
execution flow of the program.
Some of the commonly used flags or condition codes stored in such a register
are:
• Sign flag: The sign bit is set according to the sign of the previous arithmetic
operation, whether positive (0) or negative (1).
• Zero flag: This bit is set if the result of the last arithmetic operation
was zero.
• Carry flag: This bit is set if there is a carry resulting from the addition
of the highest-order bits or a borrow is taken from subtraction of the
highest-order bit.
• Equal flag: This bit is set if a logic comparison operation finds
that both of its operands are equal.
• Overflow flag: This flag is used to indicate the condition of arithmetic
overflow.
• Interrupt enable/disable flag: This flag is used for enabling or disabling
interrupts.
• Supervision flag: This flag is used in certain computers to determine
whether the CPU is executing in supervisor mode or user mode. It is
important as certain privileged instructions can be executed only in
supervisor mode, and certain areas of memory can be accessed only in
supervisor mode.
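How a few of these flags could be derived after an 8-bit addition is shown below. The computations are standard two's-complement identities; the encoding into individual int variables is an illustration, not the layout of any particular status register.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t  x = 200, y = 100;
    uint16_t wide = (uint16_t)x + y;   /* keep the ninth bit for the carry */
    uint8_t  r = (uint8_t)wide;

    int carry    = wide > 0xFF;                      /* carry out of bit 7   */
    int zero     = (r == 0);                         /* result is zero       */
    int sign     = (r & 0x80) != 0;                  /* MSB gives the sign   */
    int overflow = (~(x ^ y) & (x ^ r) & 0x80) != 0; /* signed overflow test */

    printf("r=%u C=%d Z=%d S=%d V=%d\n", r, carry, zero, sign, overflow);
    return 0;                                        /* r=44 C=1 Z=0 S=0 V=0 */
}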
In most CPUs, on encountering a subroutine call or interrupt handling routine,
it is desired that the status information, such as conditional codes and other register
information, be stored so that it can be restored once that subroutine is over. The

register that stores condition code and other status information is known as program
status word (PSW). Along with PSW, a computer can have several other status
and control registers, such as interrupt vector register in the machines using vectored
interrupt, a stack pointer, if a stack is used to implement subroutine calls, etc. The
design of status and control register also depends on the operating system (OS)
support. Hence, it is always advisable to design the register organization based on the
principles of the operating system, as some control information is of specific use
only to the operating system and hence depends on the operating system being
used. In some machines, the processor itself coordinates the subroutine
call, which will result in the automatic saving of all user-visible registers and restoring
them back on return. This allows each subroutine to use the user-visible registers
independently. While in other machines, it is the responsibility of the programmer
to save the contents of the relevant user-visible registers prior to a subroutine call.
Thus, in the second case we must include instructions that can implement the
saving of the data in the program.
However, a clear separation of registers into these two categories does not
exist. For example, on some machines, the program counter is user-visible
(e.g., VAX), while it is not so in the case of other machines.
9.2.2 Issues Related to Register Sets
The operating system design is an important issue for designing the architecture of
the control and status register organization. As control information is specifically
used by the operating system, the CPU design is somewhat dependent on operating
system. While designing the set of registers, there are few more issues, such as:
Should one use general-purpose registers or dedicated registers in a
machine?
In case of specialized registers, the number of bits needed to specify a register is
reduced, as one has to specify only few registers out of a set of registers. With the
use of specialized registers, it can generally be implicit in the opcode which type of
register a certain operand specifier refers to. The operand specifier must only
identify one of a set of specialized registers rather than one out of all the registers,
thus saving bits. Similar data can be stored either in AC or DR, out of possible 8
registers in a basic computer, as discussed earlier. However, this specialization
does not allow much flexibility to the programmer. Although there is no best solution
to this problem, yet the latest trends favour the use of a specialized register.
How many registers should be used?
Another issue related to the register set design is the number of general-purpose
registers to be used. The number of registers affects the instruction set as it determines
the type of addressing mode. The number of registers also determines the number of
bits needed in an instruction to specify a register reference. In general, it has been
found that the optimum number of registers in a CPU is in the range of 8 to 31. The more
registers that are used for storing temporary results, the lesser will be
the memory references. As the number of memory references is decreased, the
speed of execution of a program increases. But it is observed that if the number of
registers goes above 31, then there is no appreciable reduction in memory
references. However, there are systems like Reduced Instruction Set Computers
(RISC) where hundreds of registers are used. Here, a very simple instruction set
architecture is used so that an overall high-performance system is obtained.
What should be the length of the register?
Another important characteristic related to register is its size. Normally, the length
of a register depends on the purpose for which it is designed. For example, a
register that holds addresses like AR should be long enough to hold the largest
address. Similarly, the length of data register like DR and AC should be long
enough to hold the data type it is supposed to hold. In certain cases, two consecutive
registers may be used to hold data whose length is double the register length.
How should the control information be stored? Should it be accessible to
the programmer?
Status information requires only a few bits to store. Condition code registers, which
may be partially visible to the programmer and hold the condition codes of various
flags, were discussed earlier. These flags are set by the CPU as the results of
operations. For example, an arithmetic operation may produce a positive, negative,
zero or overflow result, e.g., on dividing a number by 0, the overflow flag can be
set. These codes may be tested by a program for the conditional branch operation.
The condition codes are collected in one or more registers. RISC machines have
several sets of conditional code bits. In these machines, an instruction specifies the
set of condition codes, which is to be used. Condition codes form a part of a
control register. Generally, machine instructions allow conditional code bits to be
read by implicit reference, but they cannot be altered by the programmer.
How should control information be allocated between registers and the
memory?
As it is not possible to store all control information in registers, memory has to be
used. It is common to dedicate the first few thousand words of memory for storing
control information. The designer must decide how much control information should
be in registers and how much in memory. There is always a trade-off between
cost and speed.

9.3 STACK ORGANIZATION

The first decision to be taken in ISA design is the type of internal storage in the
CPU. The three major choices are: a stack, an accumulator and a register set.
Early computers used the stack and accumulator architectures. Let us study how
a stack organization works.
A stack is a storage device that stores information in a last-in-first-out (LIFO)

fashion. Only two types of operations are possible in a stack, namely push and pop.
Push places data onto the top of the stack, while pop removes the topmost
element from the stack. These operations can be used explicitly for the execution of a
program, and in some cases the operating system implements them implicitly, such as
in subroutine calls and interrupts, as discussed earlier. Some computers reserve a
separate memory for stack operations. However, most computers utilize main
memory for representing stacks. For accessing data stored in a stack, we need a
stack pointer (SP) register. The SP register is initially loaded with the address of
the top of the stack. Each instruction pops its source operands off the stack and
pushes its result on the stack. In memory, the stack is actually upside-down.
So, when something is pushed onto the stack, the stack pointer is
decremented:
SP ← SP – 1
M[SP] ← DR
When something is popped off the stack, the stack pointer is incremented:
DR ← M[SP]
SP ← SP + 1
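These two microoperations translate almost literally into C. The sketch below is illustrative (the stack size and the downward-growing layout are assumptions); real code would also guard against the overflow and underflow conditions discussed in the next paragraph.

#include <stdio.h>

#define STACK_SIZE 16
static int mem[STACK_SIZE];   /* memory area reserved for the stack */
static int sp = STACK_SIZE;   /* SP starts just past the top        */

static void push(int dr) { sp = sp - 1; mem[sp] = dr; }  /* SP <- SP-1; M[SP] <- DR */
static int  pop(void)    { int dr = mem[sp]; sp = sp + 1; return dr; } /* DR <- M[SP]; SP <- SP+1 */

int main(void) {
    push(8);
    push(9);
    int first = pop(), second = pop();
    printf("popped %d then %d\n", first, second);   /* LIFO: 9 first, then 8 */
    return 0;
}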
While using stack architecture (Figure 9.3), one must ensure that an overflow
or underflow does not happen while performing stack operations as these
conditions lead to loss of information. Let us study the following example to
understand how stack organization implements the addition of two variables.
Example: To store the sum of memory variables A and B into location C of memory:
Push A: Copy A from memory and push it on the stack.
Push B: Copy B from memory and push it on the stack.
Add : Pop the top two stack items and push their sum on the stack.
Pop C: Pop the top of stack and store it in memory location C.
Stack uses Reverse Polish Notation (RPN) to solve arithmetic expression.
RPN is a way of representing arithmetic expressions. It avoids the use of brackets
to define priorities for evaluation of operators. In RPN scheme, the numbers and
operators are listed one after another. The architecture of a stack can be thought
as a pile of plates. The operations are performed by applying operator on the
most recent numbers, that is, on the top of the stack. An operator takes the
appropriate number of arguments from the top of the stack and replaces them
with the results of the operation. In ordinary notation, one might write
(8 + 9) * (5 – 2)
The brackets tell us that we have to add 8 and 9, subtract 2 from 5, and then
multiply the two results together. In this notation, the above expression would be:
8 9 + 5 2 – *
First, the given expression is converted into reverse Polish notation (RPN), a
notation founded by the Polish philosopher and mathematician Jan Lukasiewicz:
(A+B)*(C+D) = AB+CD+*
Then execute this program:
PUSH A
PUSH B
ADD
PUSH C
PUSH D
ADD
MUL
POP X

The stack contents after each step are:
Push A → A
Push B → A, B
Add → A+B
Push C → A+B, C
Push D → A+B, C, D
Add → A+B, C+D
Multiply → (A+B)*(C+D)
Pop X → empty (X now holds (A+B)*(C+D))

Fig. 9.3 A Stack Architecture (push and pop operate at the top of the stack, which is
marked by the stack pointer; the bottom of the stack is fixed)
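The same program can be traced in C. The sketch below (an illustration, not compiler output) evaluates the RPN sequence 8 9 + 5 2 - *, that is, (8 + 9) * (5 - 2), with the push/pop discipline of a stack machine:

#include <stdio.h>

static int stack[16], top = 0;      /* top = number of items currently held */

static void push(int v) { stack[top++] = v; }
static int  pop(void)   { return stack[--top]; }

int main(void) {
    /* 8 9 + 5 2 - *  ==  (8 + 9) * (5 - 2) */
    push(8); push(9);
    { int b = pop(), a = pop(); push(a + b); }  /* ADD: top two replaced by sum */
    push(5); push(2);
    { int b = pop(), a = pop(); push(a - b); }  /* SUB: 5 - 2                   */
    { int b = pop(), a = pop(); push(a * b); }  /* MUL: 17 * 3                  */
    printf("result = %d\n", pop());             /* prints result = 51           */
    return 0;
}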

Advantages of stack organization


• As only two types of operations are possible, push and pop, and only one
address, i.e., that of the stack pointer, is needed in the instruction, all instructions
require only one or two bytes of code. Hence, high code density can be
obtained.

• It has a simple architecture. It requires only one dedicated register (SP) and
some memory area in the RAM where the stack is kept.
• Machine code uses RPN, which is very easy to compile.
• No save/restore code is needed for procedure calls and returns.
• It is easy to implement the recursive call of any subroutine.
Disadvantages of stack organization
• Only operands at the top of the stack are accessible. Hence, to access an
item inside the stack, all the items above it must first be popped out.
• Several operations, such as swapping, require extra instructions if
implemented through stack organization.
• It uses a zero-addressing scheme.

Check Your Progress


1. What is the function of ALU?
2. What do you understand by data register?
3. Which flags are used in storing data in a register?
4. State any two advantages of stack organization.

9.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. ALU is used for executing mathematical operations, such as *, /, + and –;
logical operations, such as AND and OR; and shift operations, such as
rotation of data held in data registers.
2. Data register acts as buffer storage between the main memory and the
CPU. It also stores the operand for instructions such as ADD DR, i.e.,
AC ← AC + DR.
3. The various flags used in storing data in a register are sign flag, zero flag,
carry flag, equal flag, overflow flag, supervision flag and interrupt/disable
flag.
4. The two advantages of stack organization are as follows:
(i) Machine code uses Reverse Polish Notation which is very easy to
compile.
(ii) No save/restore code is needed for procedure calls and returns.

9.5 SUMMARY

• The simplest machine has one general-purpose register, called the accumulator
(AC), which is used for storing the input or output operand for the ALU.
• The input register and output register only need to be 8 bits since they store
8-bit characters.
• One of the key operations where the programmer-usable register is used is
when a subroutine call is issued.
• The first decision to be taken in ISA design is the type of internal storage in
the CPU. The three major choices are: a stack, an accumulator and a register
set.
• While using stack architecture, one must ensure that an overflow or
underflow does not happen while performing stack operations, as these
conditions lead to loss of information.

9.6 KEY WORDS

• Accumulator (AC): It is a general-purpose register which interacts with
the ALU and stores the results obtained from the ALU.
• Input register (INPR): This register is used for storing input received
from an input device.
• Output register (OUTR): This register is used for storing output to be
transferred to an output device.

9.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are the main components of CPU?
2. What is program counter?
3. What is the function of an accumulator?
4. What are the various status and control registers?
Long Answer Questions
1. Explain the various types of CPU registers.
2. Discuss the organisation of registers in a system.
3. What are the different issues related to the register sets?
4. Explain the stack architecture.
9.8 FURTHER READINGS
Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi:
Prentice-Hall.


UNIT 10 INSTRUCTION FORMATS


Structure
10.0 Introduction
10.1 Objectives
10.2 Instruction Formats
10.2.1 Representation of Different Instruction Formats
10.3 Addressing Modes
10.4 Manipulation of Data Transfer and Control Program
10.4.1 Length of Instructions
10.4.2 Allocation of Bits
10.4.3 Types of Instructions
10.5 Answers to Check Your Progress Questions
10.6 Summary
10.7 Key Words
10.8 Self Assessment Questions and Exercises
10.9 Further Readings

10.0 INTRODUCTION

Instruction codes are important components of a computer design. These
instruction codes determine the working of the system. They are collections of bits
that instruct the processor to perform a specific operation. Each instruction
comprises several microoperations. Generally, an instruction consists of two parts:
(i) the operations to be performed and (ii) the data type on which these operations
are to be performed. In this unit, you will learn about the various types of instruction
formats and addressing modes as well as other issues involved in designing the
instruction set for a given system. The instruction format depends on the type of
operand and operation allowed for any system. Apart from the addressing mode,
the instruction format is also dependent on the architecture of the processor. This
unit will introduce you to the various types of instructions, such as common data
manipulation instructions and program control instructions.

10.1 OBJECTIVES

After going through this unit, you will be able to:


• Understand the features of instruction formats
• Explain the various types of addressing modes
• Understand data transfer manipulation and program control

10.2 INSTRUCTION FORMATS
Every instruction is represented by a sequence of bits and contains the information
required by the CPU for execution. Depending on the format of instruction, each
instruction is divided into fields, with each field corresponding to some particular
interpretation. A general instruction format is given in Figure 10.1.

Opcode-field Address-field

Fig. 10.1 Instruction Format

• Opcode-field: It specifies the operation to be performed. The operation is
specified by a binary code, known as the operation code or opcode.
• Address-field: It provides the operands on which the operation is to be performed,
or the addresses of the CPU registers or main memory locations which
store the operands. These operands can be classified as:
o Source operand reference: The operation may involve one or more
source operands; that is, operands that are the inputs for the operation.
o Result operand reference: The operation may produce a result, i.e.,
the operand stores the output.
o Next instruction reference: This tells the CPU from where to fetch the
next instruction after the execution of the current instruction is complete.
All the temporary data can be stored either in the main memory or a register,
or can directly be sent to the input-output device. Thus, the address field
can refer to:
• Main (or virtual) memory: It contains the memory address of the main
memory.
• CPU register: The CPU contains one or more registers that may be referenced
by machine instructions. If only one register exists, reference to it may be
implicit. If more than one register exists, then each register is assigned a
unique number, and the instruction must contain the number of the desired
register.
• Input/output (I/O) device: The instruction must specify the I/O module
or device for the operation. If memory-mapped I/O is used, this is just
another memory address.
Since there are variable sources that are used for storing and retrieving
data, the question arises as to what should be the maximum number of addresses
one might need in an instruction.
Usually, arithmetic and logic operations are either unary operations that use one
operand, such as increment and NOT, or binary operations that use two operands,
such as ADD, AND, etc. The result of an operation has to be
stored either in those registers that store input or in the address as specified in the
instruction. Finally, after the completion of an instruction, the next instruction must
be fetched, and the address from where the next instruction is fetched is required.
This line of reasoning suggests that an instruction could be required to contain
four address references: two operands, one result and the address of the next
instruction. In practice, the address of the next instruction is handled by the program
counter and no explicit information is given in the instruction. Thus, the most common
instruction formats can have one, two or three operand addresses. Three-address
instruction formats are not common because they require a relatively long instruction
format to hold three address references.
10.2.1 Representation of Different Instruction Formats
Representation of each format and the execution of instructions are discussed as
follows.
1. 3-address format
The 3-address format instruction has three addresses in the instruction; two are
input or source addresses and the third is the destination address in which output
has to be stored. It can be represented as follows:
dst ← [src1] * [src2]
where src1 and src2 are the source operands, dst is the destination operand
and * represents the operation specified in the opcode field. For example, assume
that A, B, C, D, and X are variables stored in memory locations and are labelled
by their names (Figure 10.2).

ADD R1 A B      R1 ← [A] + [B]
ADD R2 C D      R2 ← [C] + [D]
MUL X R1 R2     X ← [R1] * [R2]

Fig. 10.2 3-Address Format

2. 2-address format
The 2-address format instruction has two addresses in the instruction, both input
or source addresses. One of the input registers is used as the destination address
in which output has to be stored. It can be represented as:
dst ← [dst] * [src]
Where src is the source operand, dst is the destination operand and * represents
the operation specified in opcode field. An example of the execution of instruction
using two address formats has been shown in Figure 10.3:

MOV R1 A      R1 ← [A]
ADD R1 B      R1 ← [B] + [R1]
MOV R2 C      R2 ← [C]
ADD R2 D      R2 ← [D] + [R2]
MUL R2 R1     R2 ← [R1] * [R2]
MOV X R2      X ← [R2]

Fig. 10.3 2-Address Format

3. 1-address format
Only one address is used both as source as well as destination. It usually uses an
implied accumulator (AC) (Figure 10.4).

LOAD A       AC ← [A]
ADD B        AC ← [AC] + [B]
STORE R      R ← [AC]
LOAD C       AC ← [C]
ADD D        AC ← [AC] + [D]
MUL R        AC ← [AC] * [R]
STORE X      X ← [AC]

Fig. 10.4 1-Address Format

4. 0-address format or stack addressing


The final addressing mode that we consider is stack addressing where a stack is a
linear array of locations. It is sometimes referred to as a push-down list or last-
in-first-out (LIFO) queue. The stack is a reserved block of locations, i.e., it stores
information in a LIFO fashion. Items are appended to the top of the stack so that
at any given time the block is partially filled. For each stack, there is an associated
stack pointer whose value is the address of the top of the stack. The stack pointer
is maintained in a register. Thus, references to stack locations in memory are in
fact register indirect addresses. The stack mode of addressing is a form of implied
addressing. The machine instructions need not include a memory reference but
implicitly operate on the top of the stack.
In stack organization there are only two types of operations that are possible,
namely push and pop. The push operation places data onto the top of stack,
while pop operation removes the topmost element from the stack. These operations
can be used explicitly for execution of a program. However, in some cases, the
operating system implements it implicitly. For example, stack organization is
implemented implicitly for execution of subroutine calls and interrupts, as discussed
earlier. Some computers reserve a separate memory for stack operations, but
most computers utilize main memory for representing stacks. For accessing data
stored in stack, we need a stack pointer (SP) register. The SP register is initially
loaded with the address of the top of the stack. Each instruction pops its source
operands off the stack and pushes its result on the stack. In memory, the stack is
actually upside-down. So, when something is pushed onto the stack, the stack
pointer is decremented.
Stack uses Reverse Polish Notation (RPN) to solve arithmetic expression.
RPN is a way of representing arithmetic expressions. It avoids the use of brackets
to define priorities for evaluation of operators (Table 10.1). In the RPN scheme,
the numbers and operators are listed one after another. It can be thought of as
forming a stack, like a pile of plates. The operator always acts on the most recent
numbers and goes on the top of the stack. An operator takes the appropriate
number of arguments from the top of the stack and replaces them by the result of
the operation.
Table 10.1 Different Instruction Formats

Number of Addresses    Symbolic Representation    Interpretation
3                      OP A, B, C                 A ← B OP C
2                      OP A, B                    A ← A OP B
1                      OP A                       AC ← AC OP A
0                      OP                         T ← T OP (T–1)

Let us consider the evaluation of the following expression using the different
instruction formats:
Y = (A – B) / (C + D × E)
(i) With one-address instructions (requiring an accumulator AC)
LOAD D     AC ← D
MUL E      AC ← AC × E
ADD C      AC ← AC + C
STOR Y     Y ← AC
LOAD A     AC ← A
SUB B      AC ← AC – B
DIV Y      AC ← AC / Y
STOR Y     Y ← AC

(ii) With two-address instructions
MOVE Y, A     Y ← A
SUB Y, B      Y ← Y – B
MOVE T, D     T ← D
MUL T, E      T ← T × E
ADD T, C      T ← T + C
DIV Y, T      Y ← Y / T
(iii) With three-address instructions
SUB Y, A, B     Y ← A – B
MUL T, D, E     T ← D × E
ADD T, T, C     T ← T + C
DIV Y, Y, T     Y ← Y / T

10.3 ADDRESSING MODES
The instruction set is an important aspect of any computer organization. Every
instruction has primarily two components: opcodes and operands. You will learn
how to get the operands on which all the manipulations are to be performed. A simple
ADD operation along with the opcode must also provide the information about how to
fetch the operands and where to put the result. Operands are commonly stored
either in main memory or in the CPU registers. If an operand is located in the main
memory, the location address has to be given in the instruction in the operand field.
Thus, if memory addresses are 32 bits, a simple ADD instruction will require three
32-bit addresses in addition to the opcode. The recent architectures provide a large
number of registers so that compilers can keep local variables in registers, eliminating
memory references. This results in a reduced program size and execution time.
As it is not possible to put all variables in registers, a memory reference is
required. It attempts to refer a large range of locations in main memory or, even
for some systems, virtual memory. One possibility is that they contain the memory
address of the operand but this will require large field to specify full memory
address. Also, the address must be determined at compile-time. Other possibilities
also exist, which provide both shorter specifications and the ability to determine
addresses dynamically. To achieve this objective, a variety of addressing techniques
have been employed. These techniques trade off between address range and/or
addressing flexibility, on the one hand, and the number of memory references and/
or complexity of address calculation, on the other. Basically, what an operand
stores is the effective address. The effective address (EA) of an operand is the
address of (or the pointer to) the main memory or register location in which the
operand is contained, i.e., operand = EA. There are two ways by which the control
unit determines the addressing mode used by an instruction:
(i) The opcode itself explicitly specifies the addressing mode used in the
instruction.
(ii) The use of a separate mode field in the instruction indicates the addressing
mode used.
The various modes of addressing are discussed as follows:
1. Implied Mode
The operand is specified implicitly in the definition of the instruction, as in the case
of an accumulator machine, where only the accumulator holds the operand, or a
stack organization, where the operand is the data stored on the top of the stack.
In both cases, only
one operand is available for manipulation. So, an instruction just tells us about the
opcode and no field is required for operand, as shown in Figure 10.5.

IR | Op |
Fig. 10.5 Implied Addressing Mode
2. Immediate Addressing Mode
Immediate addressing is the simplest form of addressing where the operand is
actually present in the instruction, i.e., there is no operand fetching activity as the
operand is given explicitly in the instruction. This mode can be used to define and
use constants or set initial value variables. An example is given as follows:
MOV 15, R1 (Load binary equivalent of 15 in register R1)
ADD 15, R1 (Add binary equivalent of 15 in R1 and store the result in R1)
ADD 5 (Add binary equivalent of 5 to contents of accumulator)
Advantage: The advantage of immediate addressing is that no memory reference
other than fetching of the instruction is required. As no memory reference is required
to obtain the operand, it has very small instruction cycle. Also, it is fast as memory
reference is reduced to one. It is commonly used to define and use constants, or
set initial values.
Disadvantage: The disadvantage of immediate addressing (Figure 10.6) is that
the size of the operand is restricted to the size of the address field,
which, in most instruction sets, is small as compared to the word length. Further, it
has a limited utility.
IR | Op | Operand |
Fig. 10.6 Immediate Addressing Mode
3. Absolute Mode
In this mode, the operand’s address is explicitly given in the instruction. This address
can be in either a register or in a memory location, i.e., the effective address (EA)
of the operand is given in the instruction. Figure 10.7 shows the absolute mode of
addressing.
Fig. 10.7 Absolute Mode of Addressing (register direct: the EA given in the instruction
names the register R that holds the operand; memory direct: the EA names the main
memory location that holds the operand)
(i) Direct addressing
The simplest addressing mode where an operand is fetched from memory is direct
addressing. In direct addressing, the address field contains the effective address
of the operand. Figure 10.8 shows the direct mode of addressing.
Fig. 10.8 Direct Addressing Mode (the instruction holds the opcode and a memory address
A; the operand is fetched directly from memory location A)


This technique was common in earlier generations of computers. It requires
only one memory reference. As the address field contains the address of the operand,
no special calculation is required for computing the effective address.
EA = A
e.g. ADD A
The value of operand is obtained from memory location, whose address is A, and
is added to content of accumulator. The obvious limitation in this scheme is that it
provides only a limited address space.
(ii) Register addressing
Register addressing is a way of direct addressing where the address field refers to
a register rather than the main memory address.
EA = R
The address field should store the reference of register. As 8–32 general-
purpose registers can be referenced, we need 3–5 address bits. As the CPU
registers are frequently used, register addressing is heavily used. There are a limited
number of registers (compared with the main memory locations). So, they must be
used efficiently. It is up to the programmer to decide which values should remain in
registers and which should be stored in main memory. Most modern CPUs employ
multiple general-purpose registers, placing the burden of efficient execution on the
assembly-language programmer (e.g., the compiler writer). Thus, we need a good
assembly programmer or compiler who avoids frequent data transfers between
registers and memory, reducing the time wasted in fetching data. So, if the
operand in a register is used in multiple operations, it results in a real saving. Figure
10.9 shows the register mode of addressing.
[Figure: the instruction's address field R selects a CPU register that holds the
operand.]
Fig. 10.9 Register Addressing Mode
Advantages: As there are only a few registers in this mode, a very small address
field is needed when compared to memory-access addressing modes, resulting in
short instructions. Further, execution is fast as no memory access is required.
Disadvantages: Address space is limited. Speed is achieved only with good
assembly programming or compiler writing.
4. Indirect Mode
In this mode, the register or the main memory location holds the EA of the operand.
The location where the operand is stored is calculated from address given in the
instruction.
EA = (A)
To implement such an instruction, we first look in A, find the address (A)
stored there, and fetch the operand from that address. For example, the instruction
ADD (A) adds the contents of the cell pointed to by the contents of A (the content
of A is a memory location) to the accumulator. Figure 10.10 shows the indirect
mode of addressing.
[Figure: in register-indirect mode, the instruction's field selects a register R whose
contents are the operand's memory address; in memory-indirect mode, the EA field
selects a main-memory (MM) word holding the operand's address.]

Fig. 10.10 Indirect Mode of Addressing

In the direct addressing mode, the length of the address field is usually less
than the word length. Thus, there is a limited address range. To overcome this
problem, one can use the address field that refers to the address of a word in
memory, which, in turn, contains a full-length address of the operand. The obvious
advantage of this approach is that for a word of length N, an address space of 2^N
is available. Its disadvantage is that the instruction execution requires two memory
references to fetch the operand: one to get its address and the other to get its
value.
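A minimal C sketch (illustrative names, not from the text) makes the two references visible, one to fetch the pointer and one to fetch the operand:

    /* Indirect addressing: EA = (A), two memory references. */
    #include <stdio.h>

    int main(void) {
        int memory[16] = {0};
        int A = 2;
        memory[A] = 9;            /* location A holds the operand's address */
        memory[9] = 42;           /* the operand itself */
        int EA = memory[A];       /* first reference: get the address */
        int operand = memory[EA]; /* second reference: get the value */
        printf("EA = %d, operand = %d\n", EA, operand);
        return 0;
    }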
Although the number of words that can be addressed in this mode is equal
to 2^N, the number of different effective addresses that may be referenced at any
one time is limited to 2^K, where K is the length of the address field. In a virtual
memory environment, all the effective address locations can be confined to page 0
of any process. Because the address field of an instruction is small, it will naturally
produce the low-numbered direct addresses, which would appear in page 0. When
a process is active, there will be repeated references to page 0, causing it to
remain in main memory. Thus, an indirect memory reference may involve more
than one page fault.
A rarely used variant of indirect addressing is multilevel or cascaded indirect
addressing:
EA = (…..(A)…..)
In this case, one bit of a full-word address is an indirect flag (I). If the I bit
is 0, then the word contains EA. If the I bit is 1, then another level of indirection is
invoked. There does not appear to be any particular advantage to this approach.
However, its disadvantage is that three or more memory references could be
required to fetch an operand. The multiple memory accesses needed to find an operand
make it slower.
Register indirect addressing
Just as register addressing is analogous to direct addressing, register indirect
addressing is analogous to indirect addressing. In both cases, the only difference is
whether the address field refers to a memory location or to a register. Thus, for a
register indirect address:
EA = (R)
The advantages and disadvantages of register indirect addressing are basically
the same as those of indirect addressing. In both cases, the address space limitation
(limited range of address) of the address field is overcome by referring that field to
a word-length location containing an address. In addition, register indirect addressing
uses one less memory reference than indirect addressing (see Figure 10.11).
[Figure: the instruction's field R selects a register whose contents are the memory
address of the operand.]
Fig. 10.11 Register Indirect Addressing Mode

5. Displacement Addressing
Displacement addressing is a very powerful mode of addressing. It combines the
capabilities of direct addressing and register indirect addressing. It is known by a
variety of names depending on the context of its use. However, the basic mechanism
is the same.
EA = A + (R)
Displacement addressing (see Figure 10.12) requires that the instruction
have two address fields, at least one of which is explicit. The value contained
in one address field is used directly, as in direct addressing. The other address field
can be an implicit reference based on the opcode; it refers to a register whose contents
are added to A to produce the effective address.

Self-Instructional
Material 199
[Figure: the instruction holds a register reference R and an address A; the contents
of R are added to A to form the memory address of the operand.]
Fig. 10.12 Displacement Addressing Techniques
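A minimal C sketch (with illustrative values) of the effective-address calculation EA = A + (R), as used for base-register and indexed addressing:

    /* Displacement addressing: EA = A + (R). */
    #include <stdio.h>

    int main(void) {
        int memory[32] = {0};
        int R = 8;                 /* contents of the referenced register */
        int A = 4;                 /* displacement given in the instruction */
        memory[A + R] = 99;        /* operand placed at EA = 12 */
        int EA = A + R;
        printf("EA = %d, operand = %d\n", EA, memory[EA]);
        return 0;
    }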

6. Stack Addressing
The final addressing mode that we consider is stack addressing. A stack is a linear
array of locations. It is sometimes referred to as a push-down list or last-in-first-
out (LIFO) queue. It is a reserved block of locations. Items are appended to the top of the
stack so that, at any given time, the block is partially filled. Associated with the
stack is a pointer whose value is the address of the top of the stack. The stack
pointer is maintained in a register. Thus, references to stack locations in memory
are in fact register indirect addresses.
The stack mode of addressing is a form of implied addressing. The machine
instructions need not include a memory reference but should implicitly operate on
the top of the stack.
Another important issue is how to determine the addressing mode to be
followed. Virtually all computer architectures provide more than one addressing
mode. The question arises as to how the control unit can determine the address
mode to be used in a particular instruction. Several approaches are taken. Often,
different opcodes will use different addressing modes (Table 10.2). Also, one or
more bits in the instruction format can be used as a mode field. The value of the
mode field determines which addressing mode is to be used.
Table 10.2 Various Addressing Modes

Mode Algorithm Principal Advantage Principal Disadvantage


Immediate Operand = A No memory reference Limited operand magnitude
Direct EA = A Simple Limited address space
Indirect EA = (A) Large address space Multiple memory references
Register EA = R No memory reference Limited address space
Register Indirect EA = (R) Large address space Extra memory reference
Displacement EA = A + (R) Flexibility Complexity
Stack EA = top of stack No memory reference Limited capability

Notation:
A = Contents of an address field in the instruction
R = Contents of an address field in the instruction that refers to a register
EA = Effective (actual) address of the location containing the referenced

operand
(X) = Contents of location X
Check Your Progress
1. What is the characteristic feature of the 3-address format?
2. Define immediate addressing.
3. State the major advantages and disadvantages of the direct addressing
mode.
4. What do you understand by a stack?

10.4 MANIPULATION OF DATA TRANSFER AND CONTROL PROGRAM

Designing an instruction set for a system is a complex art. A variety of designs are
possible, each having its own trade-offs and advantages. Major concerns in designing
an instruction set are as follows:
10.4.1 Length of Instructions
The length of an instruction depends on the memory size, bus architecture,
CPU complexity, etc. It should be the same as the number of bytes transferred
from memory in one cycle; otherwise, more fetch cycles would be required to
fetch a single instruction, creating a bottleneck at memory. Also, it is mandatory
that the instruction length should be a multiple of the character length, i.e., 8 bits.
Some programmers want a complex instruction set containing more instructions, more
addressing modes and greater address range, as in case of CISCs. Other
programmers, on the other hand, want a small and fixed-size instruction set that
contains only a limited number of opcodes, as in case of RISCs. The instruction
set can have variable-length instruction format primarily due to: (i) varying number
of operands, and (ii) varying lengths of opcodes in some CPUs.
10.4.2 Allocation of Bits
For a given length of instruction, the question arises as to how many bits should be
allocated for storing the opcode and how many bits are required for storing the operand
or its address. This allocation depends on the various factors, such as:
(i) Number of addressing modes: What type of addressing mode is
employed?
(ii) Number of operands used in an instruction: Today’s computers generally
provide a two-operand format.

Self-Instructional
Material 201
(iii) Register or memory: Whether an operand is stored in register or memory.
(iv) Number of register sets used: The number of registers used has a great
impact on the design of an instruction set architecture and overall performance
of a computer. When we increase the number of registers, we will notice
the following:
 The number of memory references reduces as more frequently used
data can now reside in registers.
 There is an increase in the size of an instruction word.
 There are greater demands on the compiler to schedule registers.
(v) Address range: The range of addresses that can be referred to in a computer
is related to the number of address bits.
(vi) Number of operations: If a large number of operations are designed,
then the number of bits of the instruction that are allocated for storing
opcodes will be more. For example, a basic computer using 4 bits possibly
generates 16 types of operations, and if a system requires 64 possible
operations, we have to allocate 6 bits to store an opcode.
(vii) Address granularity: An address can refer to a word or a byte depending
on the designer's choice.
An instruction is a group of bits that instructs the computer to perform a
specific operation. Each instruction comprises several microoperations. The
instruction is usually divided into parts, each part having a different interpretation.
As said earlier, an instruction provides the operation code and information about
the operand on which this operation is executed. The operand on which an operation
is executed can be operand itself or it can be the address where operand is stored
(depending on the instruction format discussed earlier).
The most basic part of an instruction code is its operation part which specifies
what operation has to be performed. The operation code is a group of bits that
define such operations as add, subtract, AND, OR, move, shift, complement,
jump, etc. The number of bits required for the operation code must be large enough
to identify all operations. Thus, it depends on the total number of operations
available in the computer. In a system having M distinct operations such that M =
2^n (or less), the operation code must consist of at least n bits. For example, if the
basic computer has 16 different operations, 4 bits are required for representing
opcode.
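The required opcode width can be computed as the smallest n with 2^n >= M. A small C sketch (purely illustrative):

    /* Minimum opcode bits for M distinct operations. */
    #include <stdio.h>

    static int opcode_bits(unsigned m) {
        int n = 0;
        while ((1u << n) < m)   /* grow n until 2^n covers m operations */
            n++;
        return n;
    }

    int main(void) {
        printf("16 operations -> %d bits\n", opcode_bits(16)); /* 4 */
        printf("64 operations -> %d bits\n", opcode_bits(64)); /* 6 */
        return 0;
    }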
An instruction code must specify the address of the operand. It can be the
address of main memory if the operands are stored in main memory, or it can be
the address of the register in case of register-addressing modes. There are various
ways of arranging the binary code of instructions. Each computer has its own
instruction code format. However, an instruction set should satisfy the following
general rules:
Self-Instructional
202 Material
(i) Completeness: One should be able to evaluate with a machine-language
program any function that is computable, using a reasonable amount
of memory space. In other words, a suitable combination of assembly-
instructions should be sufficient to construct an algorithm. A computer
should have a set of instructions such that the user can construct a
machine-language program that can evaluate any computable function.
The set of instructions is said to be complete if the computer includes
a sufficient number of instructions in the above-mentioned categories.
(ii) Efficiency: The frequently required functions should be performed
using relatively few instructions. Also, the execution time should be
the minimum.
(iii) Coherence: The instruction set should be simple enough to keep the
CPU architecture simple.
Each instruction specifies an operation to be carried out and a set of operands or
data to be used. Instruction word is divided in specific fields to store operand and
opcode. For example, as shown in Figure 10.13, there is an 8-bit instruction in
which the first nibble (first 4 bits) is the opcode and the second contains the
address of a storage location in the main memory or in the registers.

Opcode Operands
Fig. 10.13 Instruction Format

In this case, as the opcode has a 4-bit field, it is possible to have only 16 instructions.
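A minimal C sketch of decoding this 8-bit format with shifts and masks (the instruction value is illustrative):

    /* Decode Figure 10.13: high nibble = opcode, low nibble = operand field. */
    #include <stdio.h>

    int main(void) {
        unsigned char instr = 0x3A;              /* illustrative instruction */
        unsigned opcode  = (instr >> 4) & 0x0Fu; /* first nibble */
        unsigned operand =  instr       & 0x0Fu; /* second nibble */
        printf("opcode = %u, operand field = %u\n", opcode, operand);
        return 0;
    }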
The instruction set is a link between hardware and software; it reflects the
programmer's view of the system state, the primitive operands and the basic operations
to be performed on operands. Different types of computers have different instruction
sets. There are several options while selecting an instruction set, such as:
(i) Choosing a minimal yet complete set.
(ii) Choosing instructions based on their speed. (Thus, an instruction set should
comprise small instructions and fewer memory-access instructions. Such
instruction sets are used in the RISC architecture.)
(iii) Choosing an elaborate instruction set encapsulating frequent instructions,
as in case of the CISC architecture.
An instruction can be considered as a function, which is defined to be
computable if it can be calculated in a finite number of steps by a Turing machine.
A simple CPU differs widely from a Turing machine. All processors have a finite
and small amount of memory. Therefore, you should choose an appropriate
instruction set to minimize logic circuits complexity. However, this choice can lead
to excessively complex programs. So, there is a fundamental compromise between
CPU simplicity and programming complexity.
Self-Instructional
Material 203
Let us consider a simple high-level language statement: X = X + Y
If we assume a simple set of machine instructions, this operation could be
accomplished with three instructions: (assume X is stored in memory location 622,
and Y in memory location 625.)
Take the following steps:
 Load a register with the contents of memory location 622.
 Add the contents of memory location 625 to the register.
 Store the contents of the register in memory location 622.
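Rendered in C for clarity (the contents of locations 622 and 625 are illustrative), the three steps are:

    /* Load / add / store sequence for X = X + Y. */
    #include <stdio.h>

    int main(void) {
        int memory[1024] = {0};
        memory[622] = 3;          /* X */
        memory[625] = 4;          /* Y */
        int reg = memory[622];    /* load register with contents of 622 */
        reg += memory[625];       /* add contents of 625 to the register */
        memory[622] = reg;        /* store the register back into 622 */
        printf("X = %d\n", memory[622]);   /* prints 7 */
        return 0;
    }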
10.4.3 Types of Instructions
In general, all instructions fall into the following categories:
 Data transfer instructions: The data transfer operations are concerned
with transfer of data between the various components of computer, such as
data transfer between two registers or between a register and the main
memory or from a register to any circuit (such as ALU) in the processor.
This transfer is usually done by the common bus architecture.
 Data manipulation instructions: Such an instruction performs all data
manipulation operations, such as add, subtract or logical operation. These
operations are executed by ALU of the processor.
 Program control instructions: Such an instruction basically controls the
flow of instructions within a program that depends on the decision parameter.
 Miscellaneous instructions: These instructions are not used as frequently
as data movement instructions.
Here, you will study each of these in detail.
1. Data Transfer Instructions
As we know, all processes under execution reside in the main memory in
the form of binary information and all the computations of instructions stored
in these programs are done in processor registers. Therefore, the user must
be capable of moving information between these two units. Data transfer
instructions are used to transfer or copy data from one location to another
in registers or in external main memory or in input–output devices without
changing its binary content, i.e., information stored in it. These allow the
processor to move data between registers and between memory and
registers (e.g., 8086 microprocessor has mov, push, pop instructions). A
‘move’ instruction and its variants are among the most frequently used
instructions in an instruction set. This data transfer can be categorized in the
following types:
 Processor register–memory: Data may be transferred from processor
to memory and vice versa.

 Processor register–I/O: Data may be transferred to or from a peripheral
device by transferring the content of a processor register to an I/O module
and vice versa.
 Processor register–processor register: Data transfer internally among
the registers of the processor.
The common data transfer instructions, their mnemonics and actions are given in
Table 10.3.
Table 10.3 Common Data Transfer Instructions
S.No. Name Mnemonic Action
1 Load LD Data transfer from memory location to a processor
register, such as accumulator.
2 Store ST Data transfer from processor register to the memory
location.
3 Move MOV Data transfer from one register to another, especially in
case of CPU with multiple registers.
4 Exchange XCH Information swapping between two registers or a
register and a memory word.
5 Input IN Data transfer from input terminal to processor register.
6 Output OUT Data transfer from processor register to output
terminal.
7 Push PUSH Data transfer from processor register to memory stack.
8 Pop POP Data transfer from memory stack to processor register.

2. Data Manipulation Instructions


Data manipulation instructions perform operations on data and provide
computational capabilities for the computer. Data manipulation is basically
of the following three types:
 Arithmetic instructions
 Logical and bit manipulation instructions
 Shift instructions
All these instructions involve fetch phase in which data is read in binary
format from memory. The operands are brought to the processor register
from where they go to the ALU where the desired operation is performed.
Let us discuss each of these three types of instructions in detail.
(a) Arithmetic instructions
Arithmetic instructions are used to perform operations on numerical
data. There are four basic arithmetic operations: addition, subtraction,
division and multiplication. Many computers have hardware that
supports these operations. However, some processors just have addition
and subtraction circuits. In such processors, multiplication and division
are done through repeated additions and repeated subtractions,
respectively, using software subroutine. These four basic operations
are sufficient for any scientific calculations. Some typical arithmetic
operations are given in Table 10.4.
Self-Instructional
Material 205
Table 10.4 Common Arithmetic Operations
S.No. Name Mnemonic Action taken
1 Increment INC Add 1 to value stored in register or memory
word.
2 Decrement DEC Subtract 1 from the value stored in register
or memory word.
3 Add ADD Add content of two register or data in
memory location.
4 Subtract SUB Subtract content of two registers or data in
memory location.
5 Multiply MUL Multiply content of two registers or data in
memory location.
6 Divide DIV Divide content of two registers or data in
memory location.
7 Add with carry ADDC Add with carry forwarded to it in content of
two registers or data in memory location.
8 Subtract with borrow SUBB Subtract with carry forward; borrow from
one register to another or from one memory
location to another.
9 Negate (2’s complement) NEG Find the negative of a number using 2’s
complement representation.
10 Absolute ABS Find absolute value of number.

Add, subtract, multiply and divide operations are executed differently
for different types of data. For example, floating-point addition involves
finding the number with the higher exponent value and shifting the mantissa
of the other number so that the exponents of both are the same; the two
mantissas are then added and the result is normalized. Integer addition,
in contrast, involves just the addition of the two numbers.
(b) Logical and bit manipulation instructions
Logical and bit manipulation instructions are used to perform Boolean
operations on non-numerical data. These operations are performed
on the strings of bits stored in registers. Logical operations are
especially useful for comparing two operands and making logical
decision. Logical microoperations are useful in bit manipulation of
binary data, i.e., these are used for manipulating individual bits or a
portion of a word stored in a register. This is because the logical
operations consider each bit of operand separately and treat it as a
Boolean variable. They can be used to change bit values, delete a
group of bits, or insert new bit values into the register. The three basic
logical operations are: AND, NOT (complement) and OR. Other
logical operations can be performed combining these basic operations.
The complement microoperation is the same as the 1's complement, which
changes all 0s to 1s and all 1s to 0s. It is represented by putting a bar on the top
of the symbol that denotes the register name. The ∨ symbol is used
to denote an OR microoperation and the ∧ symbol is used to denote an
AND microoperation. There are 16 different logical operations which
can be performed with two binary variables.

Self-Instructional
206 Material
The bit manipulation operations primarily involve three actions: a
selected bit can be cleared to 0, set to 1, or complemented. A variety
of bit manipulation operations, such as selective set, selective complement
and selective clear (which may include masking or inserting bits, etc.), are
possible by combining these three elementary operations (see the sketch
after Table 10.5). Table 10.5 presents important
fundamental logical operations and action taken in each of them.
Table 10.5 Common Logical Operations

S.No. Name Mnemonic Action taken


1 Clear CLR Replaces the content of
registers by 0.
2 Complement COM Produces 1’s complement of
data in register bit by
inverting all bits of operand.
3 AND AND Performs logical AND
operation bits stored on two
registers.
4 OR OR Performs logical OR
operation bits stored on two
registers.
5 Exclusive –OR XOR Performs logical XOR
operation bits stored on two
registers.
6 Clear carry CLRC Sets carry bit to 0.
7 Complement carry COMC Complements the carry bit.
8 Set carry SETC Sets the carry bit to 1.
9 Enable interrupt EI The flip-flop that controls the
interrupt is enabled.
10 Disable interrupt DI The flip-flop that controls the
interrupt is disabled.
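A minimal C sketch of the three elementary bit manipulations using a mask to select the affected bits (values are illustrative):

    /* Selective set, clear and complement with a bit mask. */
    #include <stdio.h>

    int main(void) {
        unsigned char r    = 0x5C;  /* register contents  01011100 */
        unsigned char mask = 0x0F;  /* selected bits      00001111 */
        unsigned char set   = r | mask;                  /* selective set */
        unsigned char clear = r & (unsigned char)~mask;  /* selective clear */
        unsigned char comp  = r ^ mask;                  /* selective complement */
        printf("set=%02X clear=%02X complement=%02X\n", set, clear, comp);
        return 0;
    }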
(c) Shift instructions
The shift operation is used to shift the content of an operand (data in a
register or memory location) by one or more bits to provide the necessary
variation. Shift microoperations are used for serial transfer of data.
As registers are collections of flip-flops, the contents of
the register (flip-flops) can be shifted to either side: left or right. There
are three types of shift operations, namely logical, circular and arithmetic
operations, which are used for manipulating the contents of registers.
The input bit determines which type of shift operation is to be executed.
Data that is shifted off the end of the register or memory location is
either shifted into a flag register, which can be used to set a condition
flag, or is dropped, depending on how the instruction is implemented.
The first flip-flop receives the serial input, and instantaneously the bits
in the register are shifted, left or right.
 During the shift left operation, the serial input transfers a bit into
the rightmost position and shifts each bit to adjacent left bit.
 During the shift right operation, the serial input transfers a bit
into the leftmost position and shifts each bit to adjacent right bit.
Logical shift operations: In a logical shift operation, 0 is transferred
as the serial input. The shift can be right or left, depending on whether 0
enters through the most significant bit or through the least significant
bit, respectively (Figure 10.14).

[Figure: logical shift right of 00010111 gives 00001011; logical shift left of
00010111 gives 00101110.]
Fig. 10.14 An 8-bit Logical Shift Register

Circular shift operation (rotate operation) without carry: In


circular shift or bit rotation, the bits are ‘rotated’ as if the left and right
ends of the register were joined, i.e., the register is just like a circular
array. This operation just circulates the bits of the register. You can
see in Figure 10.15 that the last element becomes first and all other
bits are just shifted right. Thus, in this operation, all existing bits are
retained after the operation; only their positions are changed. Although
after this operation, the contents of a register will be different, the
relative positions of 0’s and 1’s remain the same. As this operation
retains all the existing bits, it is frequently used in digital cryptography.
This is accomplished by connecting the serial output of the shift register
to its serial input.
[Figure: right rotate through carry: 00010111 with C = 1 gives 10001011 with
C = 1; left rotate through carry: 00010111 with C = 1 gives 00101111 with C = 0;
circular right shift without carry: 00010111 gives 10001011.]
Fig. 10.15 Circular Shift Rotate without Carry and through Carry

Circular shift operations (rotate operation) through carry: Rotate


through carry is similar to the rotate without carry operations. The
only difference is that in the former, the two ends of the register are
considered to be separated by the carry flag. Thus, in this case, the bit
that is shifted in is the old value of the carry flag, and the bit that is
shifted out will become the new value of the carry flag.
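A minimal C sketch (illustrative) of a right rotate through a 1-bit carry flag, matching the values in Figure 10.15:

    /* Rotate right through carry: old carry -> MSB, old LSB -> carry. */
    #include <stdio.h>

    int main(void) {
        unsigned char r = 0x17;   /* 00010111 */
        unsigned carry  = 1;
        unsigned new_carry = r & 1u;                    /* old LSB becomes carry */
        r = (unsigned char)((r >> 1) | (carry << 7));   /* old carry enters MSB */
        carry = new_carry;
        printf("r = %02X, carry = %u\n", r, carry);     /* 8B, 1 */
        return 0;
    }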
Arithmetic shift operations: The arithmetic shift assumes that the
data being shifted is an integer in nature. Hence, in the result, the sign
bit is not shifted, maintaining the arithmetic sign of the shifted result. In
an arithmetic shift, the bits that are shifted out of either end are discarded.
In a left arithmetic shift, zeros are shifted in from the right.
In a right arithmetic shift, copies of the sign bit are
shifted in from the left.
An arithmetic shift left is equivalent to multiplication of a signed binary
number by 2 (see Figure 10.16). An arithmetic shift left inserts a 0 into
least significant bit (LSB) and shifts all other bits.

Fig. 10.16 Arithmetic Shift Left

Similarly, an arithmetic shift right operation is equivalent to division of


the number by 2 (see Figure 10.17). The arithmetic shift right leaves
the sign bit unchanged and shifts the number, including the sign bit, to
the right. The bit in LSB position is lost.

Self-Instructional
Material 209
[Figure: arithmetic shift right of 00010111 gives 00001011; the sign bit is copied
into the vacated MSB.]
Fig. 10.17 Arithmetic Shift Right

When we use integer data, i.e., in arithmetic operations on signed
numbers, it is possible that the magnitude of a result exceeds the number
of bits assigned to represent the magnitude. In an arithmetic left shift, the
initial sign bit Rn–1 is lost and is replaced by the bit at Rn–2. If Rn–1 and
Rn–2 have different values, a sign reversal occurs, making the result
incorrect. We call this an overflow condition. An overflow flip-flop, Vs, can
be used to detect an arithmetic shift-left overflow. An overflow after an
arithmetic shift left will be observed only if, initially (before the shift),
Rn–1 is not equal to Rn–2:
Vs = Rn–1 XOR Rn–2
If Vs = 0, there is no overflow; but if Vs = 1, there is an overflow and
a sign reversal after the shift. Vs must be transferred into the
overflow flip-flop with the same clock pulse that shifts the register.
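A small C sketch (illustrative) of the overflow test for an 8-bit register, where Vs is the XOR of the two leftmost bits before the shift:

    /* Detect arithmetic shift-left overflow: Vs = R7 XOR R6. */
    #include <stdio.h>

    int main(void) {
        unsigned char r = 0x5C;                           /* 01011100 */
        unsigned vs = ((r >> 7) & 1u) ^ ((r >> 6) & 1u);  /* 1: sign would flip */
        unsigned char shifted = (unsigned char)(r << 1);  /* 10111000 */
        printf("Vs = %u, result = %02X\n", vs, shifted);
        return 0;
    }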
It can be observed that the logical and arithmetic left-shifts are exactly
the same operation; the two right-shifts differ in that the logical right-shift
inserts bits with value 0, while the arithmetic right-shift copies the sign bit.
Hence, the logical shift is suitable for unsigned binary numbers,
while the arithmetic shift is suitable for signed two's complement
binary numbers (see Table 10.6).
Table 10.6 Common Shift Operations

S.No. Name Mnemonic Action taken


1 Logical shift right SHR Logical right shift
2 Logical shift left SHL Logical left shift
3 Arithmetic shift right SHRA Arithmetic right shift
4 Arithmetic shift left SHLA Arithmetic left shift
5 Rotate right ROR Circular right shift
6 Rotate left ROL Circular left shift
7 Rotate right through carry RORC Circular right shift with carry
8 Rotate left through carry ROLC Circular left shift with carry

Self-Instructional
210 Material
RTL Description
R ← shl R Logical left shift register R
R ← shr R Logical right shift register R
R ← ashl R Arithmetic left shift register R
R ← ashr R Arithmetic right shift register R
R ← cil R Circulate left register R
R ← cir R Circulate right register R
3. Program Control Instructions
Decision-making capability is an important property of computers. It is based
on the various input conditions. Usually, the programs are sequential in nature,
i.e., instructions are stored in successive memory location. For the execution
of a program, first an instruction is decoded and executed, and, at the same
time, the program counter is incremented by one so that the next instruction is
fetched in the next cycle. Thus, for those programs that are sequential
in nature, after execution of data manipulation or data transfer, the control
returns to fetch cycle with the program counter containing the address of
the next instruction in sequence. But in many programs, specially decision-
based ones, the flow is not always sequential. The program control may
change the address in the program counter and cause the flow of control to
be altered. Thus, program control instructions are those instructions that
may specify the conditions that alter the sequence of execution by changing
the contents of program counter. This decision capacity provides the control
over the flow of a program execution and branching to different segment of
a program, i.e., it causes a break in the sequence of instruction execution.
For example, the processor may fetch an instruction from location 129,
which specifies that the next instruction should be fetched from location
182. The processor will then set the program counter to 182. Thus, the next
instruction will be fetched from 182 rather than 130.
The common program control instructions are: branch, jump, call a subroutine,
return, etc. These instructions are used to change the sequence in which the
program is executed. They check the status conditions and accordingly set
the sequence of the program. These instructions are concerned with
branching for loops and conditional control structures as well as with handling
subprograms. The commonly used instructions are: Jump (Branch), Jump
Conditional, Jump to Subroutine, Return, Execute, Skip, Skip Conditional,
Halt, Wait (Hold), No Operation, etc. Table 10.7 presents some typical
program control instructions.

Self-Instructional
Material 211
Table 10.7 Common Program Control Operations
S.No. Name Mnemonic Action taken
1 Branch BR Branches to particular
location
2 Jump JMP Jumps to particular location
3 Skip SKP Skips next instruction
4 Call CALL Calls a subroutine
5 Return RET Returns from subroutine to
main program
6 Compare (by subtraction) CMP Compares values by doing
subtraction
7 Test (by ANDing) TST Tests two or more conditions
by ANDing

Branch and jump may be conditional or unconditional. An unconditional
branch instruction causes a branch to a specified location without any condition.
Thus, an address, such as in BR ADR, is provided in the instruction. On execution
of this instruction, the content of the program counter becomes ADR, i.e., the
address of the location where branching has to be done (the next instruction
will come from this location).
The conditional branch specifies the condition on which branching is to be
done, for example, branch if the result is zero or branch if the result is
positive. If the condition is fulfilled, the branch location is loaded
into the program counter. Otherwise, the program counter does not change
and the next instruction is taken from the next location in the sequence.
Branch and jump are similar actions except the fact that both may use
different addressing modes.
A conditional skip action will skip the next instruction if the desired condition
is met. Thus, here the program counter is further incremented by one. If the
condition is not met, it will follow the normal sequence of execution. This is
popularly used in the ‘if’ statement, where a condition is tested and the
future instruction is decided accordingly. The condition can be a
simple comparison of two numbers and setting up of the status bit. The
status bits, such as zero bit, sign bit, etc., are used for storing the result
obtained after testing of the conditions. These conditions can be simple or
they can be a combination of two or more conditions. The combination of
the condition can be done either by performing AND operation on the two
conditions or doing OR operation on the two conditions.
The last mode of control transfer is the call to a subroutine, which
also occurs in the case of an interrupt. In this mode, after complete
execution of the subroutine, the program returns to the main program using
the return instruction.
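A minimal C sketch (all names illustrative) of how a conditional branch alters the program counter:

    /* Branch if zero: load the target into the PC, else fall through. */
    #include <stdio.h>

    int main(void) {
        int pc = 129;          /* current instruction address */
        int target = 182;      /* branch address in the instruction */
        int zero_flag = 1;     /* status bit set by an earlier compare */
        pc = zero_flag ? target : pc + 1;
        printf("next instruction fetched from %d\n", pc);
        return 0;
    }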

Self-Instructional
212 Material

Check Your Progress


5. Differentiate between the data manipulation instruction and program control
instruction.
6. What are the various categories of data transfer?
7. Differentiate between logical shift and arithmetic shift.

10.5 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The 3-address format instruction has three addresses in the instruction; two
are input or source addresses and the third is the destination address in
which output has to be stored.
2. Immediate addressing is the simplest form of addressing where the operand
is actually present in instruction, i.e., there is no operand fetching activity as
the operand is given explicitly in the instruction. This mode can be used to
define and use constants or set initial values.
3. The major advantage of the direct addressing mode is its simplicity: the
address field contains the effective address of the operand, so only one
memory reference and no special address calculation are required. Its main
disadvantage is that it provides only a limited address space.
4. A stack is a linear array of locations. It is sometimes referred to as a push-
down list of last-in-first-out queue. It is a reserved block of locations. Items
are appended to the top of the stack so that, at any given time, the block is
partially filled.
5. The data manipulation instruction performs all data manipulation operations,
such as add, subtract or logical operation. These operations are executed
by the arithmetic logic unit of the processor. The program control instruction,
on the other hand, controls the flow of instructions within a program that
basically depends on the decision parameter.
6. Data transfer can be categorized in the following types:
 Processor register-memory: Data may be transferred from processor
to memory and vice versa.
 Processor register-I/O: Data may be transferred to or from a peripheral
device by transferring the content of processor register to an I/O
module and vice versa.
 Processor register: Data transfer internally among the registers of the
processor.

Self-Instructional
Material 213
7. In a logical shift operation, 0 is transferred as the serial input. The shift can
be right or left, depending on whether 0 is entered through the most significant
bit or through the least significant bit, respectively. The arithmetic shift assumes
that the data being shifted is an integer in nature. Hence, in the result, the
sign bit is not shifted, maintaining the arithmetic sign of the shifted result. In
an arithmetic shift, the bits that are shifted out of either end are discarded.

10.6 SUMMARY

 Every instruction is represented by a sequence of bits and contains the


information required by the CPU for execution. Depending on the format
of instruction, each instruction is divided into fields, with each field
corresponding to some particular interpretation.
 The 3-address format instruction has three addresses in the instruction; two
are input or source addresses and the third is the destination address in
which output has to be stored.
 The 2-address format instruction has two addresses in the instruction, both
input or source addresses. One of the input registers is used as the destination
address in which output has to be stored.
 Only one address is used, both as source and as destination. It usually
uses an implied accumulator.
 The final addressing mode that we consider is stack addressing where a
stack is a linear array of locations. It is sometimes referred to as a push-
down list of the last-in first-out (LIFO) queue.
 Immediate addressing is the simplest form of addressing where the operand
is actually present in instruction, i.e., there is no operand fetching activity as
the operand is given explicitly in the instruction.
 In absolute addressing mode, the operand’s address is explicitly given in
the instruction. This address can be in either a register or in a memory
location, i.e., the effective address (EA) of the operand is given in the
instruction.
 Displacement addressing is a very powerful mode of addressing. It combines
the capabilities of direct addressing and register indirect addressing.
 The data transfer operations are concerned with transfer of data between
the various components of computer, such as data transfer between two
registers or between a register and the main memory or from a register to
any circuit (such as ALU) in the processor.
 Data manipulation instructions perform all data manipulation operations,
such as add, subtract or logical operation. These operations are executed
by ALU of the processor.
Self-Instructional
214 Material
 Program control instructions control the flow of instructions within a program,
depending on the decision parameter.

10.7 KEY WORDS


 Instruction set: The collection of all the machine-language instructions
available to the programmer.
 Opcode-field: An instruction field which specifies the operation to be
performed.
 Address-field: An instruction field which provides the operands on which
an operation is to be performed, or the addresses of the CPU registers or
main memory locations which store the operands.
 Displacement addressing: A mode of addressing which combines the
capabilities of direct addressing and register indirect addressing.
 Direct addressing: The simplest addressing mode where operand is fetched
from memory.
 Register addressing: A way of direct addressing in which the address
field refers to a register rather than the main memory address.

10.8 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. Differentiate between the 2-address format and 1-address format.
2. What is register addressing? State its advantages and disadvantages.
3. Write a short note on stack addressing.
4. What is the difference between arithmetic instructions and logical
instructions?
5. Write a short note on program control instructions.
Long Answer Questions
1. Write short notes on:
(i) Direct addressing
(ii) Relative addressing
(iii) Stack addressing
2. Write five data transfer, data manipulation and program control instructions
and explain their functions.

Self-Instructional
Material 215
3. What do you understand by arithmetic instructions? Explain the features of
some typical arithmetic instructions.
4. Explain program control instructions.
10.9 FURTHER READINGS
Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-
Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi:
Prentice-Hall.


UNIT 11 INPUT-OUTPUT
ORGANIZATION
Structure
11.0 Introduction
11.1 Objectives
11.2 Peripheral Devices
11.2.1 Storage Devices: Hard Disk
11.2.2 Human-interactive I/O Devices
11.3 Input/Output (I/O) Interface
11.3.1 Problems in I/O Device Management
11.3.2 Aims of I/O Module
11.3.3 Functions of I/O Interface
11.3.4 Steps in I/O Communication with Peripheral Devices
11.3.5 Commands Received by an Interface
11.4 Asynchronous Data Transfer
11.4.1 Strobe Control
11.4.2 Handshaking
11.4.3 Asynchronous Serial and Parallel Transfers
11.5 Modes of Data Transfer
11.6 Answers to Check Your Progress Questions
11.7 Summary
11.8 Key Words
11.9 Self Assessment Questions and Exercises
11.10 Further Readings

11.0 INTRODUCTION

In this unit you will learn about the peripheral devices and I/O interface. There are
a variety of input/output (I/O) devices available in the market. These devices are
also known as peripheral equipment as they are attached to a computer externally,
i.e. they are not a part of the motherboard. The various I/O hardware devices
available for different purposes are storage devices (disk), transmission devices
(network cards and modems) and human-interface devices (screen, keyboard
and mouse). Some devices may be used for more than one activity, e.g. a disk can
be used both for input and output. Input devices are used for receiving the data
from a user and transferring it to the central processing unit (CPU). Output devices
receive data from the CPU and present it to the end user.
An I/O interface is an entity that controls the data transfer from external
device, main memory and/ or CPU registers. You can say that it is an interface
between the computer and I/O devices (external devices) and is responsible for
managing the use of all devices that are peripheral to a computer system. You will
also learn about asynchronous data transfer.
11.1 OBJECTIVES

After going through this unit, you will be able to:


 Explain the features of the various peripheral I/O devices
 Explain the features of I/O interface
 Describe how data is transferred asynchronously

11.2 PERIPHERAL DEVICES

The peripheral devices can be thought of as transducers, which can sense physical
effects and convert them into machine-tractable data. For example, a computer
keyboard, which is one of the most common input devices, accepts input by the
pressing of keys, or by physically moving cursor using mouse. Such physical actions
produce a signal that the processor translates into a byte stream or bit signal so
that it can understand it. Similarly, if we consider an output device like a computer
monitor screen, it accepts a bit stream generated by a processor which is further
translated into the signal that controls the movement of the electronic beam that
strikes the screen. The pixel combination produces a picture on the monitor screen.
Some devices mediate both input and output, e.g. memory or a disk drive.
The various types of I/O devices have been discussed here after.
11.2.1 Storage Devices: Hard Disk
A hard disk is one of the important I/O devices and is most commonly used as
permanent storage device in any processor. Due to improvement in technology
and density of magnetic disk, it has become possible to have disks with larger
capacity and at a cheaper rate.
Diskette (soft disk, floppy disk)
It is a 3.5-inch diskette with a capacity of 1.44 MB. The architecture is similar to
that of a hard disk, i.e. it is divided into concentric tracks, which are further divided
into sectors.
Magnetic tape
A magnetic tape consists of a plastic ribbon with a magnetic surface. The data is
stored on the magnetic surface as a series of magnetic spots.
Optical disk
A variety of optical disks are available in market, e.g. CD-ROM, DVD having
storage capacities in the range of 128 MB to 1 GB, etc. These disks read the data
by reflecting pulses of laser beams on the surface. A disc is usually written once with
a high-power laser that burns spots in a dye layer, turning them dark so that they
appear as pits on the surface. Such pits are read by a laser beam that reflects into a
phototransistor. Due to variations in the thickness of the disk, vibrations, etc. a
focusing lens is used to image the pits onto the phototransistor.
USB flash drives (commonly called pen drives)

These are typically small, lightweight, removable and rewritable. They are one of
the most popular modes used for data transfer because they are more compact
and generally faster, able to hold more data and more reliable (due to their lack of
moving parts and their more durable design) compared to the floppy disks. These
are NAND-type flash memory data storage devices integrated with a universal
serial bus (USB) interface.
Magneto-optical disk
A magneto-optical disk is based on the same principle as the optical disk. Both
have capacities in the range of 128 MB, 230 MB, 1.3 GB. The only difference is
that it uses a layer of magnetic grains that are reoriented by the magnetic write
head so that they either block or allow light to reflect off of the backer. As in a
floppy disk, the read-write media is stored in a self-sealing rigid case. The time
required to access the data is 16 to 30 ms, with a transfer rate of 2 to 3 MB/s.
11.2.2 Human-Interactive I/O Devices
The human-interactive devices can be further categorized as direct and indirect.
The direct devices are those that interact with people. These devices respond to
human action and display information in real-time at a rate that complements the
capabilities of people. The main job of these devices involves the translation of
data between human-readable to machine-readable forms and vice versa. The
direct I/O devices include the keyboard, mouse, trackball, screen, joystick, drawing
tablet, musical instrument interface, speaker and microphone.
Indirect devices do not interact with users. These device are used where
human beings are not directly involved in accepting the input or producing the
output such as a scanner or a printer. These devices also perform the data translation
in the format acceptable to machine. But they do not respond directly to a human
in real-time.
The human-interactive devices can further be classified into input and output types:
1. Input Devices
Input devices collect the information from the end user or from a device and convert
this information or data into a form, which can be understood by the computer. An
input device is characterized as good if it can provide useful data to the main
memory or the processor directly and timely for processing. Some common input
devices which allow to communicate with the computer are as follows:
(i) Keyboard
A keyboard is one of the most common input devices attached to all
computers. This input device may be found as part of an on-line/interactive
computer system used for entering characters. The layout of the keyboard is
similar to that of the traditional QWERTY typewriter, as it is designed
basically for editing data. The keyboards of a computer contain some
extra command keys and function keys. They contain a total of 101 to 104
keys. One can input data by pressing the correct combination of keys.
(ii) Pointing Devices
There are many pointing devices, such as light pen, joystick, mouse, etc.
(a) Mouse
Of all the pointing devices, the mouse is the most popular device used
with keyboard for accepting input. Its popularity is primarily due to
the fact that it provides very fast cursor movement providing the user
the freedom to work in any direction.
(b) Joystick
A joystick is specially used in systems that are designed for gaming
purposes. It is based on the principle of electricity, i.e. it is a resistive
device. It consists of a stick that turns the two shaft potentiometers,
one for X direction and the other for Y direction. The movement of
stick is just like the volume knob on a radio. Different positions of
potentiometer result in different voltage outputs. Using an analog-to-
digital converter (ADC), the output from the potentiometer’s resistance
at that particular position is converted into a corresponding number.
Thus, in case of joystick also, the distance covered will give a particular
output. This output of the ADC is then serialized and sent to the
computer for further processing in similar manner as in a keyboard.
(iii) Voice input systems
A system that enables a computer to recognize the human voice is called
the voice-input system. The two commonly used voice input systems are
microphone and voice recognition software.
(a) Microphone
The microphone turns acoustical pressure into a variation in voltage.
The digital value of this voltage is obtained by sampling the analog
signal at regular intervals (the sampling rate); the average integer value
of each sample is accepted as output. This digitized signal can be
used for recording, as in audio CD or can be converted into text by
processing it by voice recognition software.
(b) Voice recognition software
It is a complex software. To extract phonemes and whole words from
a voice message, you need a software that is a combination of both
signal processing and artificial intelligence techniques. Thus, a very
powerful machine and a dedicated signal-processing computer are
required to implement it. Even then, it may be limited to the single
person for whom it is trained; if there are multiple speakers, it must be
limited to just a small number of words and phrases.
Self-Instructional
220 Material
(iv) Source data automation (scanner)

Scanner is used to accept an input in any graphical format, store it in digital


format and display it back if required. It is an optical device that can read
the text or illustrations printed on paper and translate this information into a
form that a computer can use.
The common optical scanner devices are magnetic ink character recognition
(MICR), optical mark reader (OMR) and optical character reader
(OCR).
(a) MICR
It is a popularly used technique in the banking sector. All banks now issue
cheques and drafts. As cheques enter an MICR machine, they pass
through a magnetic field which causes the read head to recognize
the characters on the cheques. It has vastly helped the banking sector in
authenticating cheques.
(b) OMR
It is widely used in evaluating objective answer sheets. The students
appearing in an objective test mark an answer by darkening
a square or circular space on their answer sheets with a pencil. These
answer sheets are directly fed to the OMR machine, which evaluates
the sheets by observing the markings.
(c) OCR
It can read any printed character by comparing it with patterns
stored in the computer. A printed or handwritten character on a
piece of paper is put inside the scanner. The scanned pattern
is compared with the pattern information stored inside the computer. Only
those patterns that are matched are read (this process is called a
character read); the remaining unidentified patterns are rejected.
(v) Digital camera (video camera and tape)
A video camera records the image, converts it into a digital format via an
ADC and stores it on a frame buffer. A data rate of 28 MB/s can be achieved
for a fully digitized system where there is no compression. This can be
reduced to about 80 KB/s by using compression, which can lead to the loss of
some information.
(vi) Sensor
Sensors are non-interactive devices, i.e. devices which
accept non-online input and send this input data to computers. The
inputs of sensors are physical properties, such as temperature,
magnetic field, etc. Based on these properties, various types of sensors are
designed, such as chemical sensors (that sense chemical combination),
temperature sensors (that sense temperature), magnetic field sensors, etc.
(vii) Actuator
Actuators are also non-interactive input devices widely used for accepting
input from control devices, such as switches, valves, solenoids, motors,
stepper motors, linear motors, lights, lasers, electron beams, X-rays,
hydraulic pumps, and so on, that are controllable by computers. In these
devices also the data transfer rates vary from B/s to KB/s.
2. Output Devices
Output devices are those equipment that accept data and programs from the
computer and provide them to users. Output devices are commonly referred to as
terminals. Terminals can be classified into the following two types: (i) a hard copy
terminal that provides a printout on paper (ii) a soft copy terminal that provides a
visual copy on the monitor.
Terminals can also be classified as dumb terminals or intelligent terminals depending
upon how they work.
Some important output devices are discussed as follows:
(i) Visual display unit (Monitor)
Visual display unit (VDU), popularly known as monitor, is the most popular
output device. It resembles a television screen. This device may form part of
an interactive computer system that displays a response, message or request
received from the computer to the user. No further processing will take place
until the necessary action is taken by the user. The response time from the
user is inevitably far slower than any action undertaken by the processor.
(ii) Printer
Printer is a hardcopy terminal used to get a printed copy of the processed
text or result on paper. A large variety of printers are available in the market,
with each designed for different applications. Printers are typically
categorized according to speed, the method of printing (e.g. impact or non-
impact printing) and the quality of output (e.g. letter quality, high, low, etc.).
Line printers are considered impact printers, where the letters themselves
make contact with the paper surface. This contact involves a high degree of
mechanical movements to produce output. As a result, impact printers are
typically slower than non-impact printers. Laser printers are non-impact
printers. No keys physically hit the paper. In laser printer, a beam of light
writes an image onto the surface of the drum (which forms part of the printer).
This, in turn, causes the toner (form of ink) to be deposited and transferred
to the paper. Very fast laser printers with a high standard of output are now
available.
Impact printing and non-impact printing are discussed as follows:
(a) Impact printing
In impact printing, each character is printed on the paper by striking a
pin or hammer against an inked ribbon. According to the striking pattern,
the desired shape appears on the paper. Because hammering is a

mechanical process, such printers have a very slow speed. The most
common printer based on this technology is the dot-matrix printer, which
can typically print 120 to 200 characters per second.
(b) Non-impact printing
The non-impact printing technology prints characters and other images
on the paper, or any surface by using principles of electrostatic chemical,
heat, lasers, photography or ink-jets. Ink-jet printers and laser-jet
printers are prominent examples of non-impact printing.
 Ink-jet printers
These printers spray tiny droplets of coloured inks on the paper.
The pattern of printing depends on how nozzle sprays the ink,
which has a quality to get dried within few seconds.
 Laser-jet printers
The working of laser-jet printers is similar to that of photocopiers.
Nowadays, there is a tendency to design a device which is a hybrid
of photocopier, scanner and printer. In laser-jet printers, there
is a rotating drum over which the paper is passed. Such printers use
a low-power laser that charges the paper on the drum with a
small electrical charge at the point where a black dot is required.
This paper is then passed over a toner tray. The toner tray contains
toner, a fine black powder, which is attracted to the paper
wherever it is charged.
(iii) Plotters
Plotters are used for printing big charts, drawings, maps and three-
dimensional illustrations, and are especially used for architectural and
designing purposes.

Check Your Progress


1. What is the difference between direct and indirect I/O devices?
2. What are voice input systems?
3. What are the different types of optical scanner devices?

11.3 INPUT/OUTPUT (I/O) INTERFACE

An I/O interface is an entity that controls the data transfer between external devices,
main memory and/or CPU registers. You can say that it is an interface between
the computer and I/O devices (external devices) and is responsible for managing
the use of all devices that are peripheral to a computer system. It attempts to make
efficient use of all available devices while retaining the integrity of data. Various
features of the I/O interface are discussed as follows:
11.3.1 Problems in I/O Device Management
Some of the major problems with the I/O device management are as follows:
 There are various peripherals working on different principles. For example,
a few of them work on the electromechanical principle, a few on the electromagnetic
principle, a few on the optical principle, and so on. As each of them uses
a different method of operation, it is impractical for the processor to
understand and interpret them all. Thus, designing an instruction set that can convert
the signals into corresponding input values for all devices is not possible.
 Whenever a new I/O device is designed on some new technology, it is required to
make the device compatible with the processor. Designing an instruction
set for every new device is not at all feasible.
 The rate of data transfer of peripherals is usually much slower than that of the
processor and memory. Therefore, it is not logical to use the high-speed system bus
to communicate directly between the I/O device and the processor. A synchronization
mechanism is required for the data transfer to be handled smoothly.
 Peripheral devices accept input in a variety of formats. Thus, they may use
data formats and word lengths different from those used in the processor and main
memory.
 The operating mode of I/O devices is different for different devices. It must
be controlled so that it may not disturb the operation of other devices
connected to the processor.
To resolve these problems, there is a special hardware component between
CPU and peripheral to supervise and synchronize all input and output transfers.
Figure 11.1 illustrates the relationship between the CPU, the peripheral interface
chip and the peripheral device. Although the peripheral interface chip may appear
just like a memory location to the CPU, it contains specialized logic that allows it
to communicate with the external devices. There are a number of such I/O
controllers in a processor for controlling one or more peripheral devices.

Fig. 11.1 Relationship between CPU, Peripheral Interface Chip and Peripheral Device

11.3.2 Aims of I/O Module

The I/O modules are designed with the aims to:


 Achieve device independence
It aims to facilitate simplified software development. In other words,
it removes the complexities of individual devices and provides a ‘translator’
for the use of the device.
 Handle errors
It should ensure that I/O data are correctly handled. It informs the users in
the event of detection of any error.
 Speed up transfer of data
As I/O is typically the slowest part involved in a program's execution,
techniques such as direct memory access are applied to enhance both
software and hardware transfer speeds.
 Handle deadlocks
It should monitor conditions that can ‘lock up’ a system (e.g. resource
holding) and should take steps to avoid these conditions.
 Enable multi-user systems to use dedicated devices
It should assign sensible printing instructions, while trying to prevent the
erroneous output of data.
Each device may have its own controller that supervises the operations of
that device. A typical communication bus system between processor and
devices is shown in Figure 11.2.
Fig. 11.2 Connections between I/O Devices and Processor through I/O Bus

There are three types of buses, namely data bus, address bus and control
bus. Each device has an interface through which it is connected to a bus
(Figure 11.2). The interface decodes the signal received from the input device
into a format that the processor can understand, and also interprets the
control signals received from the processor for the peripheral devices. It
supervises and synchronizes the data flow between the external device and the
processor. Many devices also have a controller, which may or may not be
physically integrated on the interface chip. The controller is often used for
buffering the data, e.g. IDE is used as a disk controller.
11.3.3 Functions of I/O Interface
The main functions of the interface are:
 Control and timing signals
Coordination in the flow of traffic between internal and external devices is
done by control and timing signals.
 Processor communication
As a bus is usually employed for data transfer, each interaction between the
CPU and the I/O module involves bus arbitration. As the processor needs
to communicate with the external device, I/O module must perform the
following actions:
o Command decoding
I/O module accepts commands, sent as signals on the control bus,
from the processor.
o Data
The data is exchanged between the processor and the I/O module over
the data bus.
o Status reporting
Different devices have different speeds. A few are very slow compared
to the processor. Hence, the I/O module is required to know the device
status before the processor sends the data. Along with various error signals
used to verify the data sent, the common status signals used are BUSY
and READY.
o Address recognition
I/O module must recognize a unique address for each peripheral it
controls.
 Device communication
I/O module has to communicate with device to fetch status information,
data transfer rate, etc.
 Data buffering
Data comes from main memory in rapid bursts and must be buffered by the
I/O module and then sent to the device at the latter’s rate.
 Error detection
I/O module not only detects errors but also reports these errors to the CPU.
Figure 11.3 shows the block diagram of an I/O interface.
Fig. 11.3 Block Diagram of an I/O Interface

11.3.4 Steps in I/O Communication with Peripheral Devices


The various steps taken for I/O communication with peripheral devices are as
follows:
 The processor sends the address of the device it wants to communicate
with on the address bus.
 Each interface attached to the I/O bus contains an address decoder. When an interface
finds that its device address is on the address lines, it activates the path between
the bus lines and the devices that it controls.
 The processor interacts with the I/O module to check the status of the external device.
 The I/O module returns the status.
 The processor provides the operation code on the control lines.
 If the device is ready, the processor gives the I/O module a command to request data
transfer.
 The I/O module gets a unit of data from the device.
 The data is transferred from the I/O module to the processor.
 The interface interprets the opcode and proceeds accordingly.
11.3.5 Commands Received by an Interface
There are four types of commands that an interface may receive.
 Control command: This activates the device and informs the device what
action to be performed. A particular control command depends on a
particular device.
 Status command: Before the peripheral device performs the action required
by the processor, it should first check the status of the device and the interface. In
other words, the printer should not get new data until it has printed the
previous data. If there is an error in the device, the same may be reported
back to the processor.
Input-Output Organization  Data output command: This transfers data from the bus into one of the
interface registers.
 Data input command: The interface receives data from the device and
places it in its registers, from where it can be forwarded to the processor by putting
the data on the data lines of the bus.
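To make the four command types concrete, the following is a minimal C sketch of how an interface chip's firmware might dispatch on a received command. The enum names and the dispatch structure are illustrative assumptions, not the book's notation.

#include <stdio.h>

/* The four command types an interface may receive (names assumed). */
typedef enum { CMD_CONTROL, CMD_STATUS, CMD_DATA_OUT, CMD_DATA_IN } command_t;

/* Dispatch a command the way the text describes each type. */
void handle_command(command_t cmd) {
    switch (cmd) {
    case CMD_CONTROL:  /* activate the device; meaning is device-specific   */
        puts("control: activate device, select action");
        break;
    case CMD_STATUS:   /* report device/interface state and error bits      */
        puts("status: report BUSY/READY and error flags");
        break;
    case CMD_DATA_OUT: /* move data from the bus into an interface register */
        puts("data output: latch bus data into output register");
        break;
    case CMD_DATA_IN:  /* device -> interface register -> data bus          */
        puts("data input: place device data on the data lines");
        break;
    }
}

int main(void) {
    handle_command(CMD_STATUS);
    return 0;
}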

11.4 ASYNCHRONOUS DATA TRANSFER

All the operations in a digital system are synchronized by a clock that is generated
by a pulse generator. The CPU and I/O interface can be designed independently
or they can share a common bus. If the CPU and I/O interface share a common bus,
the transfer of data between the two units is said to be synchronous. There are some
disadvantages of synchronous data transfer, such as:
 It is not flexible, as all bus devices run at the same clock rate.
 Execution times are multiples of clock cycles (if an operation needs
3.1 clock cycles, it will take 4 cycles).
 The bus frequency has to be adapted to slower devices. Thus, one cannot take
full advantage of the faster ones.
 It is particularly unsuitable for an I/O system in which the devices are
comparatively much slower than the processor.
In order to overcome all these problems, an asynchronous data transfer is
used for input/output system.
The word ‘asynchronous’ means ‘not in step with the elapse of time’. In
case of asynchronous data transfer, the CPU and I/O interface are independent of
each other. Each uses its own internal clock to control its registers. There are
many techniques used for such data transfer.
11.4.1 Strobe Control
In strobe control, a control signal, called the strobe pulse, is supplied from
one unit to the other to indicate when a data transfer has to take place. Thus, for each
data transfer, a strobe is activated either by the source or the destination unit (see Figure
11.4). A strobe is a single control line that informs the destination unit that valid
data is available on the bus. The data bus carries the binary information from the
source unit to the destination unit.
Data transfer from source to destination
The steps involved in data transfer from source to destination are as follows:
(i) The source unit places data on the data bus.
(ii) The source activates the strobe after a brief delay in order to ensure that the data
values are steadily placed on the data bus.
(iii) The information on the data bus and the strobe signal remain active for some time
that is sufficient for the destination to receive it.
(iv) After this time, the source removes the data and disables the strobe pulse,
indicating that the data bus does not contain valid data.
(v) Once new data is available, the strobe is enabled again.
Figure 11.4 shows the source-initiated strobe for data transfer.
Fig. 11.4 Source-Initiated Strobe for Data Transfer

Data transfer from destination to source


The steps involved in data transfer from destination to source are as follows:
(i) The destination unit activates the strobe pulse informing the source to provide
the data.
(ii) The source provides the data by placing the data on the data bus.
(iii) The data remains valid for some time so that the destination can receive it.
(iv) The falling edge of strobe triggers the destination register.
(v) The destination register removes the data from the data bus and disables
the strobe.
Figure 11.5 shows the destination-initiated strobe for data transfer.

Fig. 11.5 Destination-Initiated Strobe for Data Transfer

The disadvantage of this scheme is that there is no guarantee that the destination
has received the data before the source removes it. Also, the destination unit initiates
the transfer without knowing whether the source has placed data on the data bus.
Thus, another technique, known as handshaking, is designed to overcome
these drawbacks.
11.4.2 Handshaking
The handshaking technique has one more control signal, used for acknowledgement.
As in strobe control, in this technique also, one control line is in the same direction
as the data flow, indicating the validity of the data. The other control line is in the
reverse direction, indicating whether the destination has accepted the data.
Data transfer from source to destination
In this case, there are two control lines, request and reply (Figure 11.6). The
sequence of actions taken is as follows:
(i) The source initiates the data transfer by placing the data on the data bus and
enabling the request signal.
(ii) The destination accepts the data from the bus and enables the reply signal.
(iii) As soon as the source receives the reply, it disables the request signal. This
also invalidates the data on the bus.
(iv) The source cannot send new data until the destination disables the reply signal.
(v) Once the destination disables the reply signal, it is ready to accept new data.

Fig. 11.6 Source-Initiated Data Transfer Using the Handshaking Technique

Data transfer from destination to source


The steps taken for data transfer from destination to source are as follows:
(i) The destination initiates the data transfer by sending a request to the source,
telling it that it is ready to accept data (see Figure 11.7).
(ii) The source, on receiving the request, places the data on the data bus.
(iii) The source also sends a reply to the destination, telling it that the requisite
data has been placed on the data bus; the request signal is then disabled so that
the destination does not issue a new request until it has accepted the data.
(iv) After accepting the data, the destination disables the reply signal so that it can
issue a fresh request for data.


Fig. 11.7 Destination-Initiated Data Transfer Using the Handshaking Technique
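The source-initiated handshake of Figure 11.6 can be expressed as a small code sketch. The following minimal C program is a hedged illustration: the flag variables and the sequential call order are assumptions made for clarity, not the book's notation, and a real implementation would run the two units concurrently.

#include <stdio.h>
#include <stdbool.h>

static int  bus;            /* models the data bus             */
static bool request, reply; /* the two handshake control lines */

static void source_send(int data) {
    bus = data;             /* (i)  place data on the data bus    */
    request = true;         /*      and enable the request signal */
}

static void destination_receive(void) {
    if (request) {
        int data = bus;     /* (ii) accept the data from the bus */
        reply = true;       /*      and enable the reply signal  */
        printf("received %d\n", data);
    }
}

static void source_complete(void) {
    if (reply)
        request = false;    /* (iii) disable request; bus data no longer valid */
}

static void destination_complete(void) {
    reply = false;          /* (v) ready to accept a new transfer */
}

int main(void) {
    source_send(42);
    destination_receive();
    source_complete();
    destination_complete();
    return 0;
}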

Advantages of asynchronous bus transaction


 It is not clocked.
 It can accommodate a wide range of devices.
11.4.3 Asynchronous Serial and Parallel Transfers
The data transfer can be serial or parallel. Thus, to transfer 16-bit data in parallel
format, we require 16 transmission lines, one line for each bit. In serial transfer,
each bit is sent one after another in a sequence of events. Serial transmission is
slow; however, it requires just one line and is hence simple to implement.
Parallel transmission, on the other hand, requires multiple paths and is a faster
mode of transmission.
Fig. 11.8 Keyboard Controller and Interface

The keyboard (Figure 11.8) has a serial asynchronous transfer mode. In this
technique, the interactive terminal inserts special bits at both ends of the character
code. Thus, each character transmission has three types of bits: a start bit, the
character bits and stop bits. Usually the transmitter rests at the 1 state when
no transmission is done. The start bit, which is 0, is sent first, indicating that the
character transmission has begun. The last bit is always 1 (Figure 11.9).

Fig. 11.9 Format of Asynchronous Serial Data Transfer

The various stages in an asynchronous data transmission are as follows (a small
framing sketch is given after the list):
1. When no transmission is done, the line is kept at the 1 state.
2. The character transmission initiates with a start bit, which is always 0.
3. The receiver can detect the start bit when the line goes from 1 to 0.
4. The character bits follow the start bit.
5. The receiver knows the transfer rate and the number of bits to be transferred.
6. After the last bit of the character is sent, one or two stop bits of 1 are sent.
7. The stop bit is detected when the line stays at the 1 state for at least one bit time.
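The following minimal C sketch frames one character for asynchronous serial transfer as just described: a start bit of 0, the character bits, and one stop bit of 1. The choice of 8 data bits sent LSB first is an assumption for illustration; the text itself does not fix the character length or bit order.

#include <stdio.h>

/* Emit the bits of one asynchronous frame, LSB of the character first. */
static void send_frame(unsigned char ch) {
    printf("0");                       /* start bit: line drops 1 -> 0 */
    for (int i = 0; i < 8; i++)
        printf("%d", (ch >> i) & 1);   /* character bits               */
    printf("1");                       /* stop bit: line returns to 1  */
    printf("\n");
}

int main(void) {
    send_frame('A');  /* 'A' = 0x41 -> frame: 0 10000010 1 (LSB first) */
    return 0;
}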

11.5 MODES OF DATA TRANSFER

Let us summarize the steps taken to write a block of memory to an output port
such that one byte is transferred at a time.
(i) Firstly, we have to initialize the memory address as well as the output port address.
(ii) The following steps are repeated until all bytes are transferred:
(a) Read one byte from memory.
(b) Write that byte to the output port.
(c) Increment the memory address so that the next byte can be transferred during
the next clock pulse.
(d) Verify if all bytes are transferred: if yes, the transfer is complete;
else, wait until the output port is ready for the next byte and go
to step (ii)(a), as sketched below.
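The loop above can be written as a short polling routine. The following is a minimal, hedged C sketch: real code would poll memory-mapped device registers, but here the port's status and data registers are simulated with ordinary variables so the sketch is self-contained and runnable.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint8_t port_status = 1;   /* simulated status register, bit 0 = READY */
static uint8_t port_data;         /* simulated data register                  */

static void write_block(const uint8_t *mem, size_t nbytes) {
    for (size_t i = 0; i < nbytes; i++) {   /* step (ii): repeat per byte */
        while (!(port_status & 1))          /* wait until the port is     */
            ;                               /* ready (busy-wait polling)  */
        port_data = mem[i];                 /* read from memory, write to port */
        printf("wrote 0x%02X\n", port_data);
        /* the memory-address increment is the loop index i */
    }
}

int main(void) {
    uint8_t block[] = { 0x10, 0x20, 0x30 };
    write_block(block, sizeof block);
    return 0;
}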
Using this approach, we transfer the data at a speed much less than
the maximum rate at which it can be read from the memory. Practically, there
are various transfer modes through which the data transfer between the computer and
the I/O device takes place at a much faster rate. These modes are as follows:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)
4. Dedicated processor, such as the input–output processor (IOP)
5. Dedicated processor, such as the data communication processor (DCP)


Fig. 11.10 Flow Chart of Programmed I/O, Interrupt-Driven I/O and DMA Modes of Data Transfer

1. Programmed I/O
Programmed I/O operations are the result of I/O instructions that are written in
the computer program. Each data transfer is controlled by an instruction set stored
in the program. When the processor has to perform any input or output instruction,
it issues a command for the appropriate I/O module that executes the given
instruction, as shown in Figure 11.10(a). The processor has to continuously monitor
the status of the I/O device to see whether it is ready for data transfer. Once it is
ready, the I/O module performs the requested action and then, by setting the appropriate
bits in the I/O status register, alerts the processor for further action.
2. Interrupt-initiated I/O
In programmed I/O, the processor has to check continuously till the device becomes
ready for transferring the data. Interrupt-initiated I/O instead uses the interrupt facility:
the processor issues a command that requests the interface to issue an interrupt when
the device is ready for data transfer. Here the interrupt is generated only when the
device is ready; hence, till the device becomes ready, the processor can execute
another program instead of checking the device as it has to do in programmed I/O.
Once the processor receives an interrupt signal [Figure 11.10(b)], it stops the current
processing task and starts I/O processing. After the completion of the I/O task, it
returns to the original task.
3. Direct Memory Access (DMA)
In direct memory access, the interface transfers the data directly to the memory unit
via the memory bus. The processor just initiates the data transfer by sending the starting
address and the number of bits to be transferred and proceeds with the previous
task. When the request is granted by the memory controller, the DMA transfers
the data directly into memory [Figure 11.10(c)]. It is the fastest mode of data
transfer.
4. Input–output processor (IOP)
IOP is a special dedicated processor that combines the interface unit and DMA as
one unit. It can handle many peripherals through DMA and the interrupt facility.
5. Data Communication Processor (DCP)
DCP is also a special-purpose dedicated processor that is designed specially for
data transfer in a network.

Check Your Progress


4. What do you understand by an I/O interface?
5. What are the various commands an I/O interface may receive?
6. List the steps involved in the transfer of data from destination to source
using the strobe technique.

11.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Direct I/O devices are those devices that interact with people. They include
the keyboard, mouse, trackball, screen, joystick, drawing tablet, musical
instrument interface, speaker and microphone. Indirect I/O devices, on the
other hand, do not interact with users and are used where humans are not
directly involved in accepting the input or producing the output, such as a
scanner or a printer.
2. A system that enables a computer to recognize the human voice is called
the voice-input system.
3. The common optical scanner devices are magnetic ink character recognition
(MICR), optical mark reader (OMR) and optical character reader (OCR).
4. An I/O interface is an entity that controls the data transfer from external
device, main memory and/ or CPU registers. We can say that it is an interface
between a computer and I/O devices (external devices) and is responsible
for managing the use of all devices that are peripheral to a computer system.
5. There are four types of commands an I/O interface may receive:

(i) Control command


(ii) Status command
(iii) Data output command
(iv) Data input command
6. The steps involved in data transfer from destination to source using the
strobe technique are as follows:
 The destination unit activates the strobe pulse informing the source to
provide the data.
 The source provides the data by placing the data on the data bus.
 Data remains valid for some time so that the destination can receive it.
 The falling edge of strobe triggers the destination register.
 The destination register removes the data from the data bus and disables
the strobe.

11.7 SUMMARY

 The peripheral devices can be thought of as transducers which can sense
physical effects and convert them into machine-tractable data.
 A hard disk is one of the important I/O devices and is most commonly used
as permanent storage device in any processor.
 A magnetic tape consists of a plastic ribbon with a magnetic surface. The
data is stored on the magnetic surface as a series of magnetic spots.
 The human-interactive devices can be further categorized as direct and indirect.
 Input devices collect the information from the end user or from a device and
convert this information or data into a form, which can be understood by
the computer.
 A system that enables a computer to recognize the human voice is called the
voice-input system.
 Output devices are equipment that accept data and programs from
the computer and provide them to users.
 An I/O interface is an entity that controls the data transfer between external
devices, main memory and/or CPU registers. You can say that it is an interface
between the computer and I/O devices (external devices) and is responsible
for managing the use of all devices that are peripheral to a computer system.
 The word ‘asynchronous’ means ‘not in step with the elapse of time’. In case
of asynchronous data transfer, the CPU and I/O interface are independent of
each other. Each uses its own internal clock to control its registers.

11.8 KEY WORDS

 Peripheral devices: They are transducers that can sense physical effects
and convert them into machine-tractable data.
 Strobe: It is a single control line that informs the destination unit that valid
data is available on the bus.

11.9 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are the criteria on which you would determine that a particular device
is an input or an output?
2. What do you understand by source data automation?
3. Differentiate between impact and non-impact printing.
4. Enumerate the steps involved in I/O communication with the peripheral
devices.
5. What do you understand by the handshaking technique of data transfer?
6. What happens when an interrupt occurs?
Long Answer Questions
1. Explain the various human-interactive I/O devices.
2. Explain the interface of I/O devices with the help of a suitable diagram.
3. What do you understand by the asynchronous data transfer? Describe the
steps involved in the destination-initiated data transfer.
4. What are the advantages of an asynchronous bus transaction?

11.10 FURTHER READINGS

Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-


Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi:
Prentice-Hall.


UNIT 12 PRIORITY INTERRUPT


Structure
12.0 Introduction
12.1 Objectives
12.2 Priority Interrupt
12.2.1 Techniques of Priority Interrupt
12.2.2 Parallel Priority Interrupt
12.3 Direct Memory Access (DMA)
12.3.1 DMA Controller
12.4 Input/Output Processor (IOP)
12.5 Serial Communication
12.6 Answers to Check Your Progress Questions
12.7 Summary
12.8 Key Words
12.9 Self Assessment Questions and Exercises
12.10 Further Readings

12.0 INTRODUCTION

In this unit, you will learn in detail about priority interrupts and the various modes
of data transfer. You have already studied that data transfer is the process of using
computing techniques and technologies to transmit or transfer electronic or analog
data from one computer node to another. There are various modes of data transfer,
such as interrupt-initiated I/O, direct memory access (DMA), etc. The interrupt-driven
I/O data transfer technique is based on the on-demand processing concept. In this,
each I/O device generates an interrupt only when an I/O event has to take place.
In DMA, the data is moved between a peripheral device and the main memory
without any direct intervention of the processor. Although DMA requires a relatively
large amount of hardware and is complex to implement, it is the fastest possible
means of transferring the data between peripheral device and memory.

12.1 OBJECTIVES

After going through this unit, you will be able to:


 Describe the characteristics of priority interrupt
 Understand how direct memory access (DMA) is used in data transfer
 Explain the features of an I/O processor

12.2 PRIORITY INTERRUPT

In the interrupt-driven I/O techniques, the processor starts data transfer when it
detects an interrupt signal which is issued when the device is ready. This helps the
processor to run a program concurrently with the I/O operations.
The interrupt-driven I/O data transfer technique is based on the on-demand
processing concept. In this, each I/O device generates an interrupt only when an
I/O event has to take place. This is like the action that has to be taken if the user
presses a key on the keyboard. The transfer is done by the service routine that
processes the required data. The interrupt handler transfers the control to this
routine. After the I/O interrupt is serviced, the processor returns the control to the
program which had been interrupted and is waiting to be executed.
Its main advantages are as follows:
 The processor does not have to wait for long for I/O modules.
 The processor does not have to repeatedly check the I/O module status.
Types of exceptions
Interrupts are just one type of exception. From the software point of view,
there are the following three types of exceptions:
(i) Interrupts: These are raised by hardware at anytime (asynchronous).
(ii) Traps: These are raised as a result of the execution of the program, such as
division by zero. As the traps are reproduced at the same spot if the program
parameters are the same as before, they are considered to be synchronous.
(iii) System calls: Also called software interrupts, these are raised by the
operating system to provide services for performing certain common I/O
tasks, such as printing a character, opening a file, etc.
Figure 12.1 illustrates the organization of a system with a simple interrupt-
driven I/O mechanism. In most microprocessors, during an I/O operation an interrupt
request (IRQ) is asserted by a peripheral device requesting attention. This request
may or may not be granted.
Fig. 12.1 A Simple Interrupt-Driven I/O System
Let us study the sequence of software and hardware events that occur when an
interrupt triggers and how the system handles them. The sequence is as follows:
(i) If a program requires any input or output, it lets the device controller or
device issue an interrupt.
(ii) User programs interact with the I/O devices through the Operating System
(OS). The OS has a special region of memory reserved for it, called the kernel
space, that is inaccessible to user programs. The processor is placed in the
kernel mode. It finishes the instruction currently under execution before
responding to the interrupt.
(iii) The processor determines what type of interrupt it is and sends a signal of
acknowledgement to the device that issued it.
(iv) Once the interrupt signal is acknowledged, the device is allowed to remove
its interrupt signal.
(v) The processor saves the information required to continue the currently
executing program once the interrupt is over. Thus, it pushes onto the stack
the status of the program stored in the Program Status Word (PSW), the
location of the next instruction as stored in the instruction counter, and
also the contents of all registers.
(vi) Now the address of the subroutine that contains the code for handling that
particular interrupt is loaded into the program counter. This is done through
an interrupt vector table, which stores the addresses of the interrupt-handling
routines (a small sketch of such a table is given after this list). Depending
on the operating system, there can be a one-to-one mapping or several
interrupt-handling routines for a given interrupt; in the latter case, the
processor decides which interrupt handler is to be invoked.
(vii) Then, interrupts are disabled to avoid an interrupt being interrupted.
(viii) The interrupt-handler now processes the interrupt by checking the status
information relating to the I/O operation or other event that caused interrupt.
(ix) After the interrupt processing is done, the saved register values are retrieved
from stack and restored in registers.
(x) Finally, interrupted PSW and PC of program are popped from stack and
the next instruction of the previously interrupted program is executed.
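The interrupt vector table mentioned in step (vi) can be sketched in C as an array indexed by interrupt number whose entries hold the addresses of the handling routines. The layout below is a generic illustration, not any specific CPU's or operating system's format, and the assignment of IRQ 1 to the keyboard is an assumption.

#include <stdio.h>

#define NUM_VECTORS 8

typedef void (*isr_t)(void);   /* an interrupt service routine */

static void default_isr(void)  { puts("spurious interrupt"); }
static void keyboard_isr(void) { puts("keyboard interrupt handled"); }

static isr_t vector_table[NUM_VECTORS];

static void init_vectors(void) {
    for (int i = 0; i < NUM_VECTORS; i++)
        vector_table[i] = default_isr;
    vector_table[1] = keyboard_isr;   /* assume IRQ 1 is the keyboard */
}

/* What the dispatch step does when interrupt n is raised. */
static void dispatch(int n) {
    vector_table[n]();                /* jump through the table */
}

int main(void) {
    init_vectors();
    dispatch(1);   /* prints "keyboard interrupt handled" */
    return 0;
}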
Figure 12.2 illustrates the interrupt handling in an I/O system. The PSW consists of
the condition codes and status information that are used by the operating
system and the interrupt-processing mechanism.

Fig. 12.2 Interrupt Handling in an I/O System

An I/O module handles the interrupt in the following sequence of events:


 It receives a READ command from the processor.
 It reads data from the desired peripheral into the data register.
 It interrupts the processor.
 It waits until the data is requested by the processor.
 It places the data on the data bus when requested.
The processor involvement in the I/O transaction is as follows:
 It issues a READ command.
 It performs some other useful work.
 It checks for interrupts at the end of the instruction cycle.
 It saves the current context when interrupted by the I/O module.
 It reads the data from the I/O module and stores it in memory.
 It restores the saved context and resumes execution.
It is always possible that service requests from more than one resource are
received simultaneously. In such cases, the system has to decide which request is
to be handled first. It is similar to the bus arbitration technique: if more than one
device requests the bus, a decision has to be made as to which device should
access the bus.
12.2.1 Techniques of Priority Interrupt
A priority interrupt establishes a priority over the various sources to determine
which request should be entertained first if several requests arrive simultaneously.
The system may allocate a priority. Usually, a high-speed device, such as a magnetic
disk, has a high priority and one that is slow in speed, such as a keyboard, has a low
priority. There are various techniques employed to decide which device to entertain
first if two devices interrupt the computer at the same time.


Polling
Polling is the technique that identifies the highest priority resource by means of
software. The program that takes care of interrupts begins at the branch address
and polls the interrupt sources in sequence. The priority is determined by the order
in which each interrupt source is polled. Thus, the highest priority source is tested
first; if its interrupt signal is on, control branches to its service routine. Otherwise,
the source having the next lower priority is tested, and so on.
The disadvantage of polling is that if there are many interrupts, the time
required to poll exceeds the time available to serve the I/O device. To overcome
this problem, the hardware interrupt unit can be used to speed up the operation.
The hardware unit accepts the interrupt request and issues the interrupt grant to
the device having the highest priority. As no polling is required, all decisions are
made by the hardware unit. Each interrupt source has its own interrupt vector to
access its own service routine. This hardware unit can establish the priority either
by a serial or a parallel connection of interrupt lines.
Daisy-chaining
This method is used to establish priority by serially connecting all devices that
request an interrupt. The priority is allocated according to the physical position of
the device in the serial connection. As, in this technique, all devices are attached
serially, the CPU issues the grant signal to the closest device requesting it. Thereby,
the one closest to the processor will have the highest priority. A minimal sketch of
this arbitration is given below.
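The following C sketch models how the grant propagates down a daisy chain: the device electrically closest to the CPU sees the grant first and absorbs it if it is requesting. The chain length and device numbering are assumptions for illustration.

#include <stdio.h>
#include <stdbool.h>

#define NUM_DEVICES 4   /* assumed chain length */

/* request[i] is true if device i has raised an interrupt; index 0 is the
 * device closest to the CPU, so it sees the grant first. */
int propagate_grant(const bool request[NUM_DEVICES]) {
    for (int i = 0; i < NUM_DEVICES; i++)
        if (request[i])
            return i;     /* device absorbs the grant: highest priority */
    return -1;            /* grant passes through the whole chain       */
}

int main(void) {
    bool request[NUM_DEVICES] = { false, true, false, true };
    printf("grant goes to device %d\n", propagate_grant(request)); /* 1 */
    return 0;
}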
12.2.2 Parallel Priority Interrupt
The parallel priority interrupt method uses a register whose bits are set separately
by the interrupt signal from each device. Priority is assigned according to the
position of the bits in the interrupt register, and a mask register is used whose
purpose is to control the status of each interrupt request. The mask register disables
a lower priority interrupt while a higher priority device is being serviced, as
sketched below.
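The following C sketch illustrates the parallel priority scheme: one bit per device in an interrupt register, a mask register that can disable lower priority requests, and a priority encoder that picks the highest set bit. The 8-device width and bit numbering (bit 7 = highest priority) are assumptions for illustration.

#include <stdio.h>
#include <stdint.h>

/* Return the highest-priority pending, unmasked request (7 = highest),
 * or -1 if none. A mask bit of 1 means the request is enabled. */
int highest_pending(uint8_t intr_reg, uint8_t mask_reg) {
    uint8_t pending = intr_reg & mask_reg;
    for (int bit = 7; bit >= 0; bit--)     /* priority encoder */
        if (pending & (1u << bit))
            return bit;
    return -1;
}

int main(void) {
    /* devices 1 and 5 request; device 5 wins because it is higher */
    printf("%d\n", highest_pending(0x22, 0xFF));  /* prints 5 */
    /* mask out device 5 while it is being serviced: device 1 wins */
    printf("%d\n", highest_pending(0x22, 0xDF));  /* prints 1 */
    return 0;
}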

Check Your Progress


1. What is polling?
2. What do you understand by the parallel priority interrupt method?

12.3 DIRECT MEMORY ACCESS (DMA)

Direct memory access (DMA) is an important data transfer technique. In DMA,


the data is moved between a peripheral device and the main memory without any
direct intervention of the processor. Although DMA requires a relatively large
amount of hardware and is complex to implement, it is the fastest possible means
of transferring the data between peripheral device and memory. It reduces the
CPU overhead as it requires no CPU involvement for continuously checking the
device status, leaving the CPU free to do other useful work. It grabs the data
buses and address buses from the CPU and uses them for transferring the data
directly between the peripheral device and memory. The CPU provides an address
on the address bus specifying the memory location from where data is to be fetched
or location where data available on data bus is to be written on memory. DMA
uses a dedicated data transfer device that reads data coming from a device and
stores it in buffer memory that can be retrieved later by the processor. The DMA
technique is particularly useful for transferring large amount of data (e.g. images,
disk transfer, etc.) to memory. The transfer of small data packets through DMA is
not considered very effective as there is a lot of overhead for establishing a DMA
connection. DMA requires additional hardware, such as a DMA controller, DMA
memory partition(s) and a fast bus.

Fig. 12.3 Comparing the Functioning of Traditional I/O and DMA

Thus, a major part of the CPU overhead is the time the CPU spends in the read
operation, as shown in Figure 12.3. The DMA module is allowed to use the system
bus when the processor does not need it, or to temporarily force the processor to
suspend operation. This suspension of the processor is called cycle stealing.

Fig. 12.4 CPU Bus Control Signals

To initiate a DMA transfer, the host writes a DMA command block. The
block contains a pointer to the source and destination of the transfer and the
number of bytes to be transferred (Figure 12.4). The address of this command
block is written to the DMA controller by the CPU. Once the CPU requests, the
'request' bit will be set for that specific block. After the DMA controller detects a
request, it starts the data transfer, which gives the CPU an opportunity to perform
other tasks. Once the DMA reads all the data, only one interrupt is generated per
block and the CPU is notified that the data is available at the buffer.
On comparing DMA with programmed I/O, we find that the CPU overhead is
negligible. As the CPU is no longer responsible for setting up the device, checking
if the device is ready after the read operation and processing the read operation
itself, we have almost zero overhead. By using DMA, the bottleneck of the read
operation will no longer be the CPU; the bottleneck is transferred to the bus (e.g.
the PCI bus). The decrease in overhead results in a much higher throughput,
approximately 3 to 5 times higher than programmed I/O.
There are three possible ways of organizing the DMA module: using a detached
module, an integrated module or a separate I/O bus. These ways are as follows:
(i) Single bus: Detached DMA module
 Each transfer uses bus twice, one from I/O to DMA and the other from
DMA to memory.
 Processor is suspended twice.
(ii) Single bus: Integrated DMA module
 Module may support more than one device.
 Each transfer uses bus only once, from DMA to memory.
 Processor is suspended once.
(iii) Separate I/O bus
 Bus supports all DMA enabled devices.
 Each transfer uses bus only once, from DMA to memory
12.3.1 DMA Controller
DMA requires additional hardware, called the Direct Memory Access Controller
(DMAC). This is used to mimic the processor by taking over control of the buses
from the CPU, allowing the transfer of information without involving the CPU. The idea is
simply that instead of interrupting the CPU with every byte of data transferred, use
a separate processor called DMA controller. This interrupts the CPU only when
the transfer of the block is complete. This is indeed more efficient than interrupting
CPU for every byte transferred. For example, for disk read operation, the controller
is provided with the address of the block of data on the disk, the destination
address in memory and the size of the data. The DMA controller is then
commanded to proceed. It is an interface chip, just like a specialized
microprocessor, which controls the data transfer between the memory and the
peripheral device. DMAC knows how this transaction should take place. Hence,
no memory fetching is done during transfer as all instructions are available for data
transfer.
The various functions of DMAC are as follows:
 To provide addresses for the source or destination of data in memory
 To inform the peripheral that data is needed or is ready.
It grabs the computer’s internal data and address buses during data transfer.
Hence, before the DMA starts the data transfer, the CPU first sets up the DMAC's
registers to specify the following:
 Whether it is a read operation or a write operation
 I/O device address, using the data lines
 Starting memory address, using the data lines (stored in the address register)
 Number of words to be transferred, using the data lines (stored in the data register)
 The direction of data transfer, i.e. whether it is from the device to the
processor or vice versa
Thus, once the DMAC has control of the bus, it generates all the timing signals that
are required for transferring the data between the peripheral and memory. A real DMA
controller is a very complex device. Its configuration and interaction with the processor
are shown in Figure 12.5, including the various signals exchanged between the
processor and the DMA. It has several internal registers: at least one to hold the
address of the next memory location to access, one to hold the number of words to
be transferred (shown as word count), a control register and data bus buffers. For
each word transferred, the DMA increments its address register and decrements its
word-count register. The DMA continues to service the request line till the word
count becomes zero. A register-programming sketch is given after Figure 12.5.
Fig. 12.5 Data Transfer Interface
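The following minimal C sketch shows how the CPU might program the registers just described and what the DMAC does per word. The register layout and control bits are hypothetical assumptions; the DMAC is simulated as a plain struct so the sketch is self-contained and runnable.

#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t address;     /* next memory location to access          */
    uint32_t word_count;  /* number of words still to be transferred */
    uint32_t control;     /* direction, start/enable bits, etc.      */
} dmac_regs_t;

enum { CTRL_READ = 1u << 0, CTRL_START = 1u << 1 };  /* assumed bits */

/* What the CPU does: load the registers, then get off the bus. */
void start_dma(dmac_regs_t *dmac, uint32_t mem_addr, uint32_t nwords) {
    dmac->address    = mem_addr;
    dmac->word_count = nwords;
    dmac->control    = CTRL_READ | CTRL_START;
}

/* What the DMAC does per word: increment address, decrement count;
 * when the count reaches zero it interrupts the CPU. */
void dmac_transfer_word(dmac_regs_t *dmac) {
    dmac->address++;
    dmac->word_count--;
    if (dmac->word_count == 0)
        puts("word count is zero: interrupt the CPU");
}

int main(void) {
    dmac_regs_t dmac;
    start_dma(&dmac, 0x1000, 2);
    dmac_transfer_word(&dmac);
    dmac_transfer_word(&dmac);   /* prints the interrupt message */
    return 0;
}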
The connection between the DMA and other components of the computer is shown in
Figure 12.6. The CPU communicates with the DMA through the address and data
buses as with any interface unit. The DMA has its own bus architecture. How these
buses are connected to the CPU, I/O device and memory unit is also given in Figure
12.6. Here, when BG = 0, the CPU communicates with the internal registers of the
DMAC through the RD and WR input lines, and when BG = 1, RD and WR are
output lines that transfer data from the DMAC to the RAM, specifying a read or
write operation.

Fig. 12.6 DMA Transfer in a Computer System

Steps involved in transferring data through the DMAC
Following are the steps involved in transferring data through the DMAC:
1. A peripheral that has to perform an I/O transaction activates the DMAC by
sending a transfer request input to it.
2. The DMA controller asserts a DMA request to the CPU, requesting control of
the buses, so that the CPU no longer controls the bus, i.e. it is taken off-line.
3. The DMA transfer takes place when the CPU returns DMA grant to the
DMAC.
4. Bus switch 1 is opened and switches 2 and 3 are closed.
5. The DMAC provides an address of the memory to the address bus. At the
same time, the DMAC provides a transfer grant signal to the peripheral, which
is then able to write to, or read from, the memory directly.
6. For each word transferred, the DMA increments its address register and
decrements its word-count register.
7. When the word-count register value reaches zero and the DMA operation has
been completed, it stops further transfer and removes the bus request.
8. The CPU is informed to terminate the bus connection by an interrupt, i.e. the
DMAC hands back the control of the bus to the CPU.
Fig. 12.7 Input/Output by means of DMA

The DMA module can transfer the entire block of data at a time, directly to
or from memory, without going through the CPU. The CPU then continues with
other work. It delegates this I/O operation to the DMA module, and that module
will take care of it. When the transfer is complete, the DMA module sends an
interrupt signal to the CPU. Thus, the CPU is involved only at the beginning and
end of the transfer.
The DMA module needs to take control of the bus in order to transfer data
to and from memory. For this purpose, the DMA module must use the bus only
when the CPU does not need it, or it must force the CPU to temporarily suspend
operation. The latter technique is more common and is referred to as cycle stealing,
since the DMA module effectively steals a bus cycle.
Figure 12.7 shows where in the instruction cycle the CPU may be suspended.
In each case, the CPU is suspended just before it needs to use the bus. This is not
an interrupt; the CPU does not save a context and does something else. Rather,
the CPU pauses for one bus cycle. The overall effect is to cause the CPU to
execute more slowly. Nevertheless, for a multiple-word I/O transfer, DMA is far
more efficient than interrupt- driven or programmed I/O.
The sequence of events that take place in the form of a series of transactions
between the peripherals, DMAC and the CPU are as follows:
 The processor is suspended once.
 The processor then continues with other work.
Self-Instructional
246 Material
 DMA module transfers the entire block of data – one word at a time – Priority Interrupt

directly to or from memory without going through the processor.


 The DMA module sends an interrupt to the processor when the transfer is
complete.
NOTES
 The processor is suspended just before it needs to use the bus.
 The DMA module transfers one word and returns control to the processor.
 Since, this is not an interrupt, the processor does not need to save context.
 The processor executes more slowly, but this is still far more efficient than
either programmed or interrupt-driven I/O.
Figure 12.8 illustrates a protocol flow chart for a DMA operation.
Fig. 12.8 Protocol Flow Chart for a DMA Operation

12.4 INPUT/OUTPUT PROCESSOR (IOP)

Till now you have studied the various modes of data transfer which involve the
CPU. As I/O is slow and wastes much of the processor's time, you can deploy
one or more external processors and assign them the task of communicating
directly with the I/O devices without any intervention of the CPU. An input/output
processor (IOP) may be classified as a processor with the direct memory access
capability that communicates with I/O devices. As shown in Figure 12.9, such a
system has one memory unit and a number of processors, which include the CPU
and one or more IOPs. The IOP's responsibility is to handle all input/output-related
operations and relieve the CPU for other operations. The processor that
communicates with remote terminals, like a telephone or any other serial
communication medium, in a serial fashion is called the data communication
processor (DCP).
Fig. 12.9 Block Diagram of an IOP

Figure 12.9 shows the block diagram of a computer having an IOP. An IOP is just
like a CPU: it can fetch and execute its own instructions. It is designed to handle all
details of I/O processing. The IOP can also perform other processing tasks, such as
arithmetic, logic, branching and code translation. It provides the path for data
transfer between the various peripheral devices and the memory unit. The CPU
initiates the I/O operation by testing the status of the IOP. If the status is fine, the
processor continues with its other work and the IOP handles the I/O operation.
After the input is completed, the IOP transfers its content to memory by stealing
one memory cycle from the CPU. Similarly, an output is transferred directly from
memory to the IOP, stealing a memory cycle, and from the IOP to the output device
at a rate at which the device accepts the output (Figure 12.10).
Fig. 12.10 Data Transfer between IOP and CPU

Instructions that are read from memory by an IOP are called commands, to
distinguish them from instructions that are read by the CPU. The CPU informs the
IOP where the commands are in memory and when they are to be executed
(Figure 12.11).
Fig. 12.11 CPU Command for Memory

The command word constitutes the program for the IOP. It informs the IOP what to
do, where to put data in memory, how much data is to be transferred and
any other special requests (Figure 12.12).
Fig. 12.12 IOP Instruction
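The command word of Figure 12.12 can be sketched as a C bit-field structure. The field widths below are illustrative assumptions; the text only names the fields OP, Addr, Cnt and Other.

#include <stdio.h>

typedef struct {
    unsigned op      : 4;   /* what to do                  */
    unsigned addr    : 16;  /* where to put data in memory */
    unsigned count   : 8;   /* how much data to transfer   */
    unsigned special : 4;   /* any other special requests  */
} iop_command_t;

int main(void) {
    iop_command_t cmd = { .op = 2, .addr = 0x1F00, .count = 64, .special = 0 };
    printf("op=%u addr=0x%04X count=%u\n",
           (unsigned)cmd.op, (unsigned)cmd.addr, (unsigned)cmd.count);
    return 0;
}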

In most computers, the CPU acts as a master and the IOP as a slave. The I/O
operations are started by the CPU but are executed by the IOP. The CPU gives the
start command to begin the I/O operation after testing the status. The status words
indicate the conditions of the IOP and I/O devices, such as an overload condition,
device busy or device ready status, etc. Once it finds that the status bit is OK, the
CPU sends the instruction to the IOP to start the I/O transfer. The memory address
received from the instruction tells the IOP where to find its program. The CPU
continues with another program, while the IOP is busy with the I/O program. Both
programs refer to memory by means of DMA transfer. The IOP interacts with the
CPU by means of interrupts. Also, on completing the I/O program, the IOP sends an
interrupt to the CPU. The CPU responds to the interrupt by checking the IOP status
to find whether the complete transfer operation took place with or without error.
Figure 12.13 illustrates the communication between CPU and IOP.
Fig. 12.13 CPU–IOP Communication
12.5 SERIAL COMMUNICATION

For data communication with a remote device, a special data communication
processor is used. The data communication processor is an IOP that distributes
and collects data from the remote terminals through telephone or other connection
lines. It is a specialized I/O processor designed to communicate directly with a data
communication network. A communication network may consist of a wide range of
devices, such as printers, display devices, sensors, etc. Using a data communication
processor, the computer can serve fragments of each network demand in an
interspersed manner. Thus, it appears to be serving many users at once. The main
difference between the IOP and the DCP is that the IOP communicates with the
peripherals through a common bus that consists of many data and control lines,
while in the DCP each terminal is attached by a pair of wires. Thus, in the IOP all
peripherals use a common bus to transfer information to and from the processor,
whereas the DCP communicates with each terminal through a single pair of wires.
Both data and control information are transferred in a serial fashion, which results
in a much slower transfer (Figure 12.14). It is the DCP's task to collect and transfer
data to and from each terminal and also to ensure that all the requests are taken
care of according to the predetermined procedure.

Fig. 12.14 An Example of Serial Transmission

One common example of a DCP is the modem. It is used for establishing a connection
between the computer and the telephone line. As telephone lines are designed for
analog signal transfer, a modem converts the audio signal of the telephone line
to digital format for computer use and also converts the digital signal to an audio
signal that can be transmitted through the communication line.
The transmission can be synchronous or asynchronous, depending upon the
transmission mode of the remote terminal. Synchronous transmission does not
use start and stop bits. It is commonly used with high-speed devices to realize the full
efficiency of the communication link. The synchronous message is sent as a continuous
stream for maintaining synchronism. In modems, internal clocks are set to the
frequency of the communication line; the receiver clock has to be adjusted
continuously to compensate for any frequency shift. In asynchronous transmission, on
the other hand, each character is sent separately with its own start and stop bits.
The message is sent as a group of bits forming a block of data. The entire block is
transmitted with special control characters at the beginning and end of the block,
as shown in Figure 12.15: SYNC is used for synchronous data, PID is the process
ID, followed by the message (packet), the CRC code and EOP indicating the end of
the block. One function of the data communication processor is to check for
transmission errors. The CRC (cyclic redundancy check) is a polynomial code
algorithm that is used to detect errors that occur during transmission; a small sketch
of a CRC computation follows Figure 12.15.

SYNC | PID | Packet Specific Data | CRC | EOP

Fig. 12.15 Data Format
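The polynomial-code idea behind the CRC can be illustrated in code. The following minimal C sketch uses the common CRC-8 polynomial x^8 + x^2 + x + 1 (0x07); the text does not specify which polynomial the block format of Figure 12.15 uses, so this choice is an assumption.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

uint8_t crc8(const uint8_t *data, size_t len) {
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)   /* divide by the polynomial, */
            crc = (crc & 0x80)              /* one bit at a time         */
                ? (uint8_t)((crc << 1) ^ 0x07)
                : (uint8_t)(crc << 1);
    }
    return crc;  /* appended by the sender, recomputed by the receiver */
}

int main(void) {
    uint8_t packet[] = { 'D', 'A', 'T', 'A' };
    printf("CRC = 0x%02X\n", crc8(packet, sizeof packet));
    return 0;
}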

Check Your Progress


3. What do you understand by DMA?
4. Define the data communication processor.

12.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Polling is the technique that identifies the highest priority resource by means
of software.
2. The parallel priority interrupt method uses a register whose bits are set
separately by an interrupt signal for each device. Priority is assigned
according to the bit value in the interrupt register. A mask register is used
whose purpose is to control the status of each interrupt request. It disables
a lower priority interrupt while a higher priority device is being serviced.
3. Direct memory access (DMA) is an important data transfer technique. In
DMA, the data is moved between a peripheral device and the main memory
without any direct intervention of the processor. The DMA technique is
particularly useful for transferring large amount of data (e.g. images, disk
transfer, etc.) to memory.
4. The data communication processor is an IOP that distributes and collects
data from the remote terminals through telephone or other connection lines.
It is a specialized I/O processor designed to communicate directly with
a data communication network.

12.7 SUMMARY

• In the interrupt-driven I/O techniques, the processor starts data transfer
when it detects an interrupt signal, which is issued when the device is ready.
• The interrupt-driven I/O data transfer technique is based on the on-demand
processing concept. In this, each I/O device generates an interrupt only
when an I/O event has to take place.
• Polling is the technique that identifies the highest priority resource by means
of software. The disadvantage of polling is that if there are many interrupts,
the time required to poll exceeds the time available to serve the I/O device.
• Daisy chaining is used to establish priority by serially connecting all devices
that request an interrupt. The priority is allocated according to the physical
position of the device in the serial connection.
• Direct memory access (DMA) is an important data transfer technique. In
DMA, the data is moved between a peripheral device and the main memory
without any direct intervention of the processor.
• An input/output processor (IOP) may be classified as a processor with the
direct memory access capability that communicates with I/O devices.
• The processor that communicates with remote terminals like telephone or
any other serial communication media in serial fashion is called the data
communication processor (DCP).

12.8 KEY WORDS

• Polling: It is a data transfer technique which identifies the highest priority
resource by means of software.
• Daisy-chaining: It is a data transfer technique which is used to establish
priority by serially connecting all devices that request an interrupt.
• Subroutine: It is a self-contained program (piece of instruction code) that
may be invoked or called by the main program.

• Direct memory access (DMA): It is a data transfer technique in which
the data is moved between a peripheral device and the main memory without
any direct intervention of the processor.
• Input/output processor (IOP): It is a processor with the direct memory
access capability that communicates with I/O devices.

12.9 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. What are the different techniques of priority interrupt?
2. What are the various types of exceptions?
3. Define parallel priority interrupt.
4. Write a short note on the direct memory access (DMA).
Long Answer Questions
1. Discuss the various techniques of priority interrupt.
2. What is the significance of DMA? Explain.
3. What are the different possible ways of organising DMA module?
4. Explain I/O processor with the help of block diagram.

12.10 FURTHER READINGS

Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi:
Prentice-Hall.

BLOCK V
MEMORY ORGANIZATION

UNIT 13 MEMORY
Structure
13.0 Introduction
13.1 Objectives
13.2 Memory Hierarchy
13.3 Main Memory
13.3.1 RAM
13.3.2 ROM
13.4 Auxiliary Memory
13.5 Associative Memory
13.6 Answers to Check Your Progress Questions
13.7 Summary
13.8 Key Words
13.9 Self Assessment Questions and Exercises
13.10 Further Readings

13.0 INTRODUCTION

In this unit, you will learn about the various types of memory and their hierarchy.
The computer memory is an essential part of a computer system. Memory can be
divided into two types, primary memory and secondary memory. The main memory
communicates directly with the CPU. The secondary memory communicates with
the main memory through the I/O processor. The main memory is of two types—
RAM and ROM. You will also learn about the purpose of the different auxiliary
memories used in a computer system and the concept of associative memory.

13.1 OBJECTIVES

After going through this unit, you will be able to:


• Define memory hierarchy
• Explain main memory and its functions
• Discuss auxiliary and associative memory

13.2 MEMORY HIERARCHY

The memory hierarchy consists of the total memory system of any computer. The
memory components range from higher capacity slow auxiliary memory to a
relatively fast main memory to cache memory that is accessible to the high-speed
processing logic. A five-level memory hierarchy is shown in Figure 13.1.


At the top of this hierarchy are the Central Processing Unit (CPU) registers,
which are accessed at full CPU speed. This is local memory to the CPU, which
requires it constantly. Next comes cache memory, which is currently on the order of 32 KB
to a few megabytes. After that is the main memory, with sizes currently ranging from
16 MB for an entry-level system to a few gigabytes at the other end. Next are
magnetic disks, and finally we have magnetic tapes and optical disks.
The memory, as we move down the hierarchy, mainly depends on the
following three key parameters:
• Access Time
• Storage Capacity
• Cost
• Access Time: CPU registers are the CPU's local memory and are accessed
in a few nanoseconds. Cache memory access takes a small multiple of the CPU
register access time. Main memory access time is typically a few tens of nanoseconds.
Then comes a big gap, as disk access times are at least 10 milliseconds
(msec), and tape and optical disk access may be measured in seconds if
the media has to be fetched and inserted into a drive.
• Storage Capacity: The storage capacity increases as we go down the
hierarchy. CPU registers are good for about 128 bytes. Cache memories are a
few Megabytes (MB). Main memory ranges from tens to thousands of MB. Magnetic
disk capacities range from a few Gigabytes (GB) to tens of GB. The capacity of
tapes and optical disks is virtually unlimited as they are usually kept offline.
[Figure: a pyramid with registers at the top, followed by cache, main memory, magnetic disk, and tape/optical disk at the base.]

Fig. 13.1 Five-level Memory Hierarchy

The main memory occupies a central position, as it can communicate directly with
the CPU and, through the Input/Output (I/O) processor, with the auxiliary devices.
Cache memory is placed in between the CPU and the main memory.
Cache usually stores the program segments currently being executed in the
CPU and temporary data frequently needed by the CPU in the present calculations.
The I/O processor manages the data transfer between the auxiliary memory and
the main memory. The auxiliary memory usually has a large storage capacity but
a low access rate as compared to the main memory and hence is relatively
inexpensive. Cache is very small but has very high access speed and is relatively
expensive. Thus, we can say that:

Access speed ∝ Cost
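This trade-off can be made concrete with a little arithmetic. The Python sketch below computes the effective access time of a two-level memory for several hit ratios; the timing figures are assumed purely for illustration.

def effective_access_time(hit_ratio, fast_ns, slow_ns):
    """Average time per reference for a two-level memory (illustrative)."""
    return hit_ratio * fast_ns + (1.0 - hit_ratio) * slow_ns

CACHE_NS, MAIN_NS = 2.0, 50.0              # assumed access times
for h in (0.80, 0.90, 0.95, 0.99):
    t = effective_access_time(h, CACHE_NS, MAIN_NS)
    print(f"hit ratio {h:.2f}: {t:5.1f} ns")   # 11.6, 6.8, 4.4, 2.5 ns

The closer the hit ratio gets to 1, the closer the average access time gets to the speed of the fast (and expensive) level, which is why even a small fast memory pays for itself.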

13.3 MAIN MEMORY

The memory unit that communicates directly with the CPU is called main memory.
It is relatively large and fast and is basically used to store programs and data
during computer operation. The main memory can be classified into the following
two categories:
13.3.1 RAM
The term, Random Access Memory (RAM), is basically applied to the memory
system that is easily read from and written to by the processor. For a memory to
be random access means that any address can be accessed at any time, i.e., any
memory location can be accessed in a random manner without going through any
other memory location. The access time for each memory location is the same.
The two main classifications of RAM are Static RAM (SRAM) and Dynamic
RAM (DRAM).
Static RAM or SRAM
Static RAM is made from an array of flip-flops where each flip-flop maintains a
single bit of data within a single memory address or location.
SRAM is a type of RAM that holds its data without external refresh as long
as power is supplied to the circuit. The word ‘static’ indicates that the memory
retains its content as long as power is applied to the circuit.
Dynamic RAM or DRAM
Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called refresh circuit. This circuitry reads the contents of
each memory cell many hundreds of times per second to find out whether the
memory cell is being used at that time by computer or not. Due to the way in
which the memory cells are constructed, the reading action itself refreshes the
contents of the memory. If this is not done regularly, then DRAM will lose its
contents even if it continues to have power supplied to it. Because of this refreshing
action, the memory is called dynamic.
13.3.2 ROM
In every computer system, there is a portion of memory that is stable and impervious
to power loss. This type of memory is called Read Only Memory or in short
ROM. It is non-volatile memory, i.e., information stored in it is not lost even if the
power supply goes off. It is used for permanent storage of information and it
possesses random access property.
The most common application of ROM is to store the computer's Basic
Input-Output System (BIOS), since the BIOS is the code that tells the processor
how to access its resources when the system is powered up. Another application is
storing the code for embedded systems.
There are different types of ROMs. They are as follows:
• PROM or Programmable Read Only Memory: Data is written into a
ROM at the time of manufacture. However, the contents can be programmed
by a user with a special PROM programmer. PROM provides flexible and
economical storage for fixed programs and data.
• EPROM or Erasable Programmable Read Only Memory: This allows
the programmer to erase the contents of the ROM and reprogram it. The
contents of EPROM cells can be erased by exposing them to ultraviolet
light. This type of ROM provides more flexibility than
ROM during the development of digital systems. Since they are able to
retain the stored information for longer duration, any change can be easily
made.
• EEPROM or Electrically Erasable Programmable Read Only
Memory: In this type of ROM, the contents of the cell can be erased
electrically by applying a high voltage. EEPROM need not be removed
physically for reprogramming.

13.4 AUXILIARY MEMORY

Secondary storage, also known as external memory or auxiliary storage, differs


from primary storage in that it is not directly accessible by the CPU. The computer
usually uses its input/output channels to access secondary storage and transfers
the desired data using an intermediate area in primary storage. Secondary storage
does not lose the data when the device is switched off and hence it is non-volatile.
Some examples of secondary storage technologies are Flash memory, Universal
Serial Bus (USB) Flash drives or keys, floppy disks, magnetic tape, paper tape,
punched cards, standalone RAM disks and Iomega Zip drives.
The storage devices that provide backup storage are called auxiliary memory.
Magnetic Disk
Magnetic disks are circular metal plates coated with magnetized material on both
sides. Several disks are stacked on a spindle one below the other, with read/write
heads, to make a disk pack. The disk drive consists of a motor and all disks rotate
together at very high speed. Information is stored on the surface of a disk along
concentric sets of rings called tracks. These tracks are divided into sections called
sectors. A set of corresponding tracks in all surfaces of a disk pack is called
cylinder. Thus, if a disk pack has n plates, there are 2n surfaces, hence the number
of tracks per cylinder is 2n. The minimum quantity of information, which can be
stored is a sector. If the number of bytes to be stored in a sector is less than the
capacity of the sector, the rest of the sector is padded.
Figure 13.2 shows a magnetic disk memory.
[Figure: a disk pack of surfaces 1 to 2n stacked on a spindle, with read/write heads; the corresponding tracks on all surfaces form a cylinder.]

Fig. 13.2 Magnetic Disk

The subdivision of a disk surface into tracks and sectors is shown in Figure 13.3.

[Figure: a disk surface divided into concentric tracks and pie-shaped sectors, with a read/write head positioned over one track.]

Fig. 13.3 Surface of a Disk

Suppose s bytes are stored per sector, there are p sectors per track, t
tracks per surface and m surfaces. Then, the capacity of disk will be defined as
Capacity = m × t × p × s bytes
If d is the diameter of the disk, the density of recording is

Density = (p × s) / (π × d) bytes/inch
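A worked instance of these two formulas, with assumed parameter values, is sketched below in Python.

from math import pi

s = 512      # bytes per sector        (assumed)
p = 64       # sectors per track       (assumed)
t = 1000     # tracks per surface      (assumed)
m = 8        # surfaces (4 platters)   (assumed)
d = 3.5      # disk diameter in inches (assumed)

capacity = m * t * p * s                 # Capacity = m x t x p x s
density = (p * s) / (pi * d)             # Density = (p x s) / (pi x d)
print(capacity // 2**20, "MB")           # -> 250 MB
print(round(density), "bytes/inch")      # -> about 2980 bytes/inch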
A set of disk drives are connected to a disk controller. The disk controller Memory

accepts commands and positions the read/write heads for reading or writing. When
the read/write command is received by the disk controller, the controller first
positions the arm so that the read/write head reaches the appropriate cylinder.
The time taken to reach the appropriate cylinder is known as Seek time (Ts). The
maximum seek time is the time taken by the head to reach the innermost cylinder
from the outermost cylinder or vice versa. The minimum seek time will be 0 if the
head is already positioned on the appropriate cylinder. Once the head is positioned
on the cylinder, there is further delay because the read/write head has to be
positioned on the appropriate sector. This rotational delay is also known as Latency
time (Tl). The average rotational delay equals half the time taken by the disk to
complete one rotation.
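As a numerical illustration (the rotational speeds below are assumed, not taken from any particular drive), half of one revolution time gives the average latency:

def avg_rotational_delay_ms(rpm):
    """Half of one revolution time, in milliseconds."""
    return (60_000.0 / rpm) / 2

for rpm in (5400, 7200, 15000):          # assumed rotational speeds
    print(rpm, "RPM ->", round(avg_rotational_delay_ms(rpm), 2), "ms")
# 5400 RPM -> 5.56 ms, 7200 RPM -> 4.17 ms, 15000 RPM -> 2.0 ms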
Floppy Disk
A floppy disk, also known as diskette, is a very convenient bulk storage device
and can be taken out of the computer. It can be either 5.25" or 3.5" size, the 3.5"
size being more common. It is contained in a rigid plastic case. The read/write
heads of the disk drive can write or read information from both sides of the disk.
The storage of data is in the magnetic form, similar to that in hard disk. The 3.5"
floppy disk has storage up to 1.44 Mbytes. It has a hole in the centre for mounting
it on the drive. Data on the floppy disk is organized during the formatting process.
The disk is organized into sectors and tracks. The 3.5" high-density disk has 80
concentric circles called tracks and each track is divided into 18 sectors. Tracks
and sectors exist on both sides of the disk. Each sector can hold 512 bytes of data
plus other information like address, etc. It is a cheap read/write bulk storage device.
Magnetic Tapes
Magnetic disk is used by almost all computer systems as a permanent storage
device; however, magnetic tape is still a popular form of low-cost magnetic storage
media and it is primarily used for backup storage purposes. The standard backup
magnetic tape device used today is Digital Audio Tape (DAT). These tapes provide
approximately 1.2 Gbytes of storage on a standard cartridge-size cassette tape.
These magnetic tape memories are similar to that of audio tape recorders.
A magnetic tape drive consists of two spools on which the tape is wound.
Between the two spools, there is a set of nine magnetic heads to write and read
information on the tape. The nine heads operate independently and record
information on nine parallel tracks, parallel to the edge of the tape. Eight tracks are
used to record a byte of data and the ninth track is used to record a parity bit for
each byte. The standard width of the tape is half an inch. The number of bits per
inch (bpi) is known as recording density.
Normally, when data is recorded into the tape, a block of data is recorded
and then a gap is left and then another block is recorded and so on. This gap is
known as the Inter-Block Gap (IBG). The blocks are normally ten times as long
as the IBG. The beginning of the tape (BOT) is indicated by a metal foil known as a
marker and the End Of Tape (EOT) is also indicated by a metal foil known as end
of tape marker.
The data on the tape is arranged as blocks and cannot be addressed. They
can only be retrieved sequentially in the same order in which they are written.
Thus, if a desired record is at the end of the tape, earlier records have to be read
before it is reached and hence, the access time is very high as compared to magnetic
disks.
Optical Disks
Optical disk storage technology provides the advantage of high volume and
economical storage with somewhat slower access times than traditional magnetic
disk storage.
CD-ROM
Compact Disk-Read Only Memory (CD-ROM) optical drives are used for the
storage of information that is distributed for read-only use. A single CD-ROM can
hold up to 800 MB of information. Software and large reports distributed to a
large number of users are good candidates for this media. CD-ROM is also more
reliable for distribution than floppy disks or tapes. Nowadays, almost all software
and documentations are distributed only on CD-ROM.
In CD-ROMs, the information is stored evenly across the disk in segments
of the same size. Therefore, the amount of data stored on a track increases as we
go towards the outer surface of the disk and hence, CD-ROMs are rotated at variable
speeds for the reading process.
Erasable Optical Disk
A recent development in optical disks is the erasable optical disk. They are used
as an alternative to standard magnetic disks when the speed of access is not
important and the volume of data stored is large. They can be used for image,
multimedia, and high-volume, low-activity backup storage. Data on these disks can be
changed repeatedly, as on a magnetic disk. The erasable optical disks are portable
and highly reliable and have a longer life. They use a format that makes semi-random
access feasible.
Check Your Progress
1. Where is cache memory located in the memory hierarchy?
2. Write the function of I/O processor.
3. Write the purpose of RAM.
4. What is dynamic RAM?

13.5 ASSOCIATIVE MEMORY

An associative memory, also called content-addressable memory (CAM), is a


very high speed memory that provides a parallel search capability. It is capable of
searching the contents of all its locations at any instant of time. An associative
memory checks all data stored in it simultaneously, with a particular match pattern,
i.e., here content is matched rather than address as done in random access memory.
Hence, each word in such a memory should include a circuit that can do a pattern
comparison. These memories involve complex and advanced circuitry, making them
more expensive than conventional memories. They are used for special purposes
requiring high speed. However, there are areas within high-performance
architectures, such as cache and virtual memory management, where content-
addressable memories play a critical role, and their cost can easily be justified.
The general structure of an associative memory is shown in Figure 13.4. It
consists of a set of words, numbered 0 to n – 1. When a word is written into it, no
address is given; the memory is capable of finding an empty space to store the
word. When the memory has to read any data, the content of the word, or a part of
the word, is specified. This specified word or part of a word is used for pattern matching.
Usually such a memory stores large word sizes, often of 100 bits or more.
However, the number of words is limited because the more words there are, the
greater the circuitry requirements and the higher the price.

DATA

MASK TAGS

WORD n – 1

WORD 1
WORD 0

Fig. 13.4 An Associative Memory

Two registers are used with a CAM: a MASK register, also called the key register,
and a data register, also called the argument register. The size of each register is the same as
that of one word stored in the associative memory. In addition, each word has a circuit to
perform the comparison operation. One or more tag bits are associated with each
word. Each set of tag bits forms a bit-slice register whose size equals the
number of words in the CAM.
[Figure: an associative memory array and logic of m words, n bits per word, with an argument register (A) and a key register (K) above it, a match register (M) alongside, and input, read, write and output lines.]

Fig. 13.5 Block Diagram of an Associative Memory

As shown in Figure 13.5, the associative memory can be considered an array of m words,
each having n bits. Read and write signals control the operation,
and n input bits hold the word to be written. The collection of tag flip-flops forms the
match register M of size m.

[Figure: the array drawn as cells Cij, for word i and bit j; each bit column j is compared against argument bit Aj under key bit Kj, and each word i drives its match bit Mi.]

Fig. 13.6 Associative Memory of m Words (each of size n)

Here each word is matched in parallel with the content of the argument register.
The bit corresponding to each word of memory in M holds the match status.
Once the matching process is done, those bits in the match register are set which
correspond to the matched words in the associative memory. We can use only some
portion of the argument register by placing a mask in the key register. The entire
argument is compared with each memory word if the key register contains all 1's.
Otherwise, only those bits in the argument that have 1's in their corresponding
positions of the key register are compared. Thus, the key provides a mask, or
identifying piece of information, which specifies how the memory reference is to be made.
Let us consider an example where the argument register A and the key register K
have the following bit configurations:

A 10101010
K 00001111

These two registers set the search pattern to 1010 in the last four bits. Every word
that contains the pattern 1010 in its last four bits will set its match bit. Let us
consider the following three words and find their match status for the above pattern:

Word1 10101111 no match
Word2 11111010 match
Word3 10101011 no match
Here only Word2 sets its match status. Thus, in this case, when the CAM performs a
search, the selected pattern (the presence of 1010 in the last four bits) is compared
with each word. The tag bit for a word is set to one if the match is found; thus, the
tag bit for Word2 will be set. At the end of this process, all matching words may be
identified by their tag bits. In systems that support more complex operations, often
more than one tag bit is used.
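The masked search just described can be modelled in a few lines of Python. The word values and register contents follow the example above; the class itself is only an illustrative sketch, not a description of the hardware.

class AssociativeMemory:
    """Illustrative model of a CAM with one match (tag) bit per word."""
    def __init__(self, words):
        self.words = words                  # words stored as bit strings
        self.match = [0] * len(words)       # tag bits, one per word

    def search(self, argument, key):
        """Set the match bit of each word that agrees with the argument
        in every bit position where the key register holds a 1."""
        for i, w in enumerate(self.words):
            self.match[i] = int(all(k == "0" or a == b
                                    for a, k, b in zip(argument, key, w)))
        return self.match

cam = AssociativeMemory(["10101111", "11111010", "10101011"])
print(cam.search("10101010", "00001111"))   # -> [0, 1, 0]: only Word2 matches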
[Figure: one cell of the comparator circuit, with an R-S flip-flop Fij storing the bit, input and write lines, a read line, and match logic that compares the cell with argument bit Aj under key bit Kj and feeds match bit Mi.]

Fig. 13.7 One Cell of Comparator Circuit


The high speed of the associative memory is achieved by performing the
matching operation simultaneously for all words stored in it.
Hence, a comparator is required for every word in the memory so that all the
comparisons are done in parallel, with the help of the circuit shown in Figure 13.7.
An associative memory has an n-bit input, but not necessarily all possible 2^n
combinations are present. The n-bit input is a tag stored in the argument
register that is compared with a tag field in each location simultaneously. The
circuit in Figure 13.7 is one cell of the associative memory. If the input tag
matches a stored tag, the data associated with that location is output; otherwise
the associative memory produces a miss output. An associative memory does not
have explicit addresses as other memories like RAM, ROM or the hard disk have;
rather, the data itself acts as the address, since to know whether a piece of information
is present we have to check the contents, i.e., whether that data is stored somewhere
or not. Cache memory is usually an associative memory. For data searching,
associative mapping is used.
After the search operation is complete, one, more than one, or no tags may
show match status. Apart from the main search operation, the other common
operations performed on an associative memory are READ and WRITE. If only one
word is matched, a READ operation may be performed to transfer data
from the selected location. If no match is found, an error signal will be returned. In
case of more than one match, the READ operation will select any one of the matched
words, read it and clear its tag bit. A similar operation will be performed for the
successive READs, so that all of the matched words are accessed. An associative
memory is a dynamic memory and hence should also have a write capability for
storing the information to be searched.
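The successive-READ discipline described above can be sketched by reusing the AssociativeMemory model from the previous example (again purely illustrative):

def read_all_matches(cam):
    """READ every matched word: pick one, clear its tag bit, repeat."""
    results = []
    while any(cam.match):
        i = cam.match.index(1)          # select any matched word (the first here)
        results.append(cam.words[i])    # transfer data from that location
        cam.match[i] = 0                # clear its tag bit
    return results

cam.search("10101010", "11110000")      # mask the search to the first four bits
print(read_all_matches(cam))            # -> ['10101111', '10101011']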
Some other operations associated with associative memories are:
Count Matches: To let the user know how many matches there are, i.e., how many
tag bits are set. As said earlier, this can be zero, one, or many. However, it is difficult
to know the exact count if there is more than one match.
Masked Write: To write only the selected bits, given in the mask register, of the
matched data (the words whose tag bits are marked), the other bits remaining unchanged.
Multiwrite: To write simultaneously to all those words whose data are matched.
Store: To load new data into memory by writing a data word at an empty location,
rather than over an already written one.
Delete: Unwanted words can be deleted or replaced by new words. If it is
required to delete a word in order to create space to store a new word, the obvious
choice is an inactive word. Additional tag bits are used for differentiating
between active and inactive words.
Address Operations: To determine the coordinate address of a tagged word, or
to read or write by address.
Tag Operations: To set, clear or read a tag register, or to copy among multiple tag
registers.
Other than cache memory, another common use of associative memory is to hold
the page map table in the case of virtual memory.

[Figure: a 32-bit virtual address split into a virtual page number (bits 31 to 12) and a byte-in-page offset (bits 11 to 0); the associative memory holds up to 4096 entries mapping virtual page numbers to physical page addresses.]

Figure 13.8 Associative Memory

Acceleration of Virtual Address Translation (VAT)


The virtual memory system requires at least two memory accesses: one for searching
for the page in the page map table and the other for the actual data fetch. The access
time is thus effectively doubled if the page map table is kept in main memory. In order
to reduce the access time, we can store the page map table in an associative memory
instead. We can use an associative memory, called a Translation
Lookaside Buffer (TLB), for this purpose. Since associative memory is very
expensive, it is often not feasible to store the complete mapping information in it.
Hence, we keep a small number of recently used page entries in the associative memory
and the complete copy of the page map table in the main memory. So, if a recently
used page is used again, as is quite probable based on the principle of locality,
the associative memory fetches the translation and inserts it directly into the physical
address, avoiding the extra reference to main memory.
As it is not possible to store the complete table in the TLB, there are chances
of misses in the TLB. In case of a miss, the page map table stored in main memory
has to be accessed to generate the address. This address is also stored in the TLB as a
new entry. A TLB is typically fully associative and is of a small size that can store ten
to a few thousand entries. Each entry can help in the translation of addresses for a
large number of locations (e.g., 4K). Normal exploitation of memory locality
ensures that entries in the TLB change much less frequently than cache entries.
Typical hit rates in a TLB are high, approximately 99 per cent. Also, when a miss
occurs, an extra time of about 10 to 40 clock cycles is wasted in retrieving the
translation from the main memory. So, if the TLB misses on 1 in 100 accesses and
the penalty is 40 cycles, then the processor slows down by nearly a third.
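This slowdown arithmetic can be reproduced directly. The miss ratio and penalty below come from the text; the one-cycle base access time is an assumption of the sketch.

BASE = 1.0             # assumed cycles for an access that hits in the TLB
MISS_RATE = 1 / 100    # 1 miss in 100 accesses (from the text)
PENALTY = 40           # extra cycles per TLB miss (from the text)

average = BASE + MISS_RATE * PENALTY    # 1.4 cycles per access
overhead = (average - BASE) / average   # fraction of time lost to misses
print(f"{average:.1f} cycles/access, {overhead:.0%} overhead")   # about 29%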
Associative memory is also very commonly used in database systems for
fast retrieval of information by feeding in some data item, such as getting the information
about a customer by feeding in the customer ID. The search in the case of associative memory
will then be content based, i.e., on the customer ID.
[Figure: the CPU sends a virtual address (VA) to the TLB; on a hit the physical address (PA) goes to the cache and main memory, while on a miss the translation is fetched from the page table; valid bits in the table indicate whether each page is in physical memory or on disk storage.]

Fig. 13.9 Use of TLB

Check Your Progress


5. What is an associative memory?
6. What is the purpose of using associative memory?

13.6 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. Cache memory is placed between the CPU and the main memory in memory
hierarchy.
2. The function of I/O processor is to manage the data transfer between the
auxiliary memory and the main memory.
3. The purpose of RAM is to store data and applications that are currently in

use by the processor.


4. Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called refresh circuit.
5. An associative memory, also called content-addressable memory (CAM),
is a very high speed memory that provides a parallel search capability. It is
capable of searching the contents of all its locations at any instant of time.
6. Associative memory is used for the special purposes requiring high speed.

13.7 SUMMARY

• The memory hierarchy consists of the total memory system of any computer.
The memory components range from higher capacity slow auxiliary memory
to a relatively fast main memory to cache memory that is accessible to
the high speed processing logic.
• The memory unit that communicates directly with the CPU is called main
memory. It is relatively large and fast and is basically used to store programs
and data during computer operation.
• The two main classifications of RAM are Static RAM (SRAM) and Dynamic
RAM (DRAM).
• Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called refresh circuit.
• In every computer system, there is a portion of memory that is stable and
impervious to power loss. This type of memory is called Read Only Memory
or in short ROM. It is non-volatile memory, i.e., information stored in it is
not lost even if the power supply goes off.
• Secondary storage, also known as external memory or auxiliary storage,
differs from primary storage in that it is not directly accessible by the CPU.
• An associative memory, also called content-addressable memory (CAM),
is a very high speed memory that provides a parallel search capability. It is
capable of searching the contents of all its locations at any instant of time.

13.8 KEY WORDS

• Main memory: Communicates directly with the CPU and with the auxiliary
devices through the I/O processor.
• RAM: Main memory of a computer system.
• DRAM: A type of RAM that only holds its data if it is continuously accessed
by special logic called refresh circuit.

13.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions


1. Name the five levels in the memory hierarchy.
2. Differentiate between RAM and ROM.
3. What is associative memory?
4. Discuss the significance of associative memory.
Long Answer Questions
1. Explain the various parameters on which memory hierarchy is based.
2. Explain the concept of memory hierarchy system with the help of examples
and illustrations.
3. Describe main memory and its functions.
4. What is auxiliary memory? Explain.

13.10 FURTHER READINGS

Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi:
Prentice-Hall.


UNIT 14 MEMORY ORGANIZATION


Structure
14.0 Introduction
14.1 Objectives
14.2 Cache Memory
14.3 Virtual Memory
14.4 Memory Management Hardware
14.5 Answers to Check Your Progress Questions
14.6 Summary
14.7 Key Words
14.8 Self Assessment Questions and Exercises
14.9 Further Readings

14.0 INTRODUCTION

In this unit, you will learn about the cache memory and virtual memory. Cache
memory is defined as a very high speed memory that is used in a computer system
to compensate the speed differential between the main memory access time and
the processor logic. A very high speed memory called cache is used to increase
the speed of processing by making the current programs and data available to the
CPU at a rapid rate. It is placed between the CPU and the main memory. The
virtual memory is a concept that permits the user to construct a program with size
more than the total memory space available to it. This technique allows user to use
the hard disk as if it is a part of main memory. You will also learn about the memory
management hardware.

14.1 OBJECTIVES

After going through this unit, you will be able to:


• Explain cache memory
• Explain the features of a virtual memory system
• Discuss the memory management hardware

14.2 CACHE MEMORY

The cache is a small, fast memory placed between the CPU and the main memory.
The system performance can improve dramatically by using cache memory at a
relatively lower cost. The word cache is derived from the French word that means
hidden. It is named so because the cache memory is hidden from the programmer
and appears as if it is a part of the system's memory space. It improves the speed
because it can be accessed very rapidly by the processor, with
a fetch cycle time comparable to the speed of the CPU. The whole concept of using
cache memory is based on the principle of hierarchy and locality of reference.
This results in an overall increase in the speed of the system. In a system that uses
a comparatively small 512 MB cache memory and 2 GB of RAM, it is observed that the
processor finds the data it needs in the cache about 95 per cent of the time. The initial
microprocessors had truly tiny cache memories, for example, 32 bytes. But in the early
1990s, cache sizes of 8 KB to 32 KB became common. By the end of the 1990s, multilevel
cache configurations became common. A multilevel chip has one cache of capacity
up to 128 KB internal to the chip, and the other, external to the chip, forms the second-level
cache having capacity up to 1 MB.
In Figure 14.1, it can be seen that the cache memory is attached to both the
processor and the main memory in parallel via address and data buses. This is
done so that data consistency is maintained in both cache and the main memories.
[Figure: the CPU's address and data buses connect in parallel to the cache (typically 64 KB to 512 MB) and the main store (typically 64 MB to 4 GB); the cache controller interrogates both, and on a hit the data is fetched from the cache rather than the main store.]

Fig. 14.1 Cache Memory Organization

According to the principle of memory hierarchy, the complete program resides on


the hard disk and a few active pages of the current process (in case of large
programs) reside in the main memory. A small part of the main memory is copied
to the cache. It is the role of the cache controller to determine whether the data desired
by the processor resides in the cache memory or it is to be obtained from the main
memory. The processor generates the address of a word to be read and sends it
to the address bus. The cache controller fetches the address and matches it with the
contents of the cache. If the desired data is found in the cache, a HIT signal is generated
and the word is delivered to the processor. However, if the data does not exist in
the cache, a MISS signal is generated and the data is searched for in main memory. If
the data is found in main memory, it is delivered to the processor and is also
simultaneously loaded into the cache. If the data is not found in main memory, it is
fetched from the hard disk, as in the case of the virtual memory technique.
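The hit/miss flow just described can be modelled in a short Python sketch; dictionaries stand in for the cache and the main memory, the addresses and contents are assumed, and the hard-disk fallback is omitted for brevity.

cache = {}                                   # assumed contents: empty cache
main_memory = {0x1000: 42, 0x1004: 99}       # assumed main-memory contents

def read_word(address):
    if address in cache:                     # HIT: deliver from the cache
        return cache[address], "hit"
    word = main_memory[address]              # MISS: fetch from main memory
    cache[address] = word                    # ...and load the cache as well
    return word, "miss"

print(read_word(0x1000))                     # (42, 'miss') on the first touch
print(read_word(0x1000))                     # (42, 'hit') once it is cached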

14.3 VIRTUAL MEMORY

As you know, all data is stored in the hard disk and the program that is under the
execution resides in the main memory. The virtual memory is a concept that permits
the user to construct a program with size more than the total memory space available
to it. This technique allows the user to use the hard disk as if it were a part of main memory.
Hence, with this technique, a program with a size even larger than the actual physical
memory available can execute. Here the only thing required is an address mapping
from virtual addresses to physical addresses in main memory. An address generated by
the CPU during the execution of a program is called a virtual address, and the set of such
addresses is the address space. An address in the main memory is called a physical address,
and the set of these addresses is called memory space. A virtual memory system
provides a mechanism for translating a program-generated address into a main
memory location. A program uses the virtual address space,
which stores data and instructions. In the usual case, the address space is larger than the
memory space, where the actual manipulation has to be done. If there is a main memory
with a capacity of 32K words, 15 bits are required to specify a physical memory
address. Let the system have an auxiliary memory of 1 MB; it will then require 20
address bits to access the data. As said earlier, in the virtual memory system,
a mapping from the virtual address space to the physical address space is required.
The system uses a table that maps a 20-bit virtual address to a 15-bit physical
address. This translation is required for every word (Figure 14.2).

Name Space Logical


Name

Virtual Logical Address Space


Address

Physical Address Space Physical


Address

Figure 14.2 Translation of Logical Address to Physical Address
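The 20-bit to 15-bit translation described above can be made concrete with a small sketch. The 20-bit virtual and 15-bit physical widths come from the text; the 1K-word page size (10-bit offset) and the page-table entry are assumptions for illustration.

PAGE_BITS = 10                        # assumed: 1K-word pages (10-bit offset)
page_table = {3: 5}                   # assumed: virtual page 3 -> frame 5

def translate(virtual_addr):
    page = virtual_addr >> PAGE_BITS              # upper 10 of the 20 bits
    offset = virtual_addr & ((1 << PAGE_BITS) - 1)  # lower 10 bits, kept as-is
    frame = page_table[page]                      # lookup (a miss = page fault)
    return (frame << PAGE_BITS) | offset          # 5-bit frame + 10-bit offset

va = (3 << PAGE_BITS) | 42            # word 42 of virtual page 3
print(f"{translate(va):015b}")        # -> 001010000101010 (15-bit address)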

In Figure 14.3, a relationship between virtual address and physical address is


shown. By using address translation, we calculate the physical location of data in
main memory. It can be seen from the figure that the virtual address space is larger
than the physical address space.
[Figure: virtual addresses pass through address translation to produce physical addresses.]

Fig. 14.3 Mapping from Virtual Address to Physical Address


This technique is especially useful for a multiprogramming system where
more than one program resides in main memory. Such a system is managed
efficiently with the help of the operating system. The objective of virtual memory is
to have the maximum possible portion of a program in the main memory and the
remaining portion of the program reside on the hard disk. The operating system,
with some hardware support, swaps data between memory and disk such that it
interferes minimally with the running of the program. The operating system manages
the whole memory through its memory management techniques. If the hard disk has
to be referred to very frequently, the swapping of data in and out between the main
memory and the hard disk becomes the dominant activity. Such a condition is referred
to as thrashing, and it greatly reduces the efficiency of the system. Virtual memory
can be thought of as a way to provide an illusion to the user that the disk is an
extension of main memory.
Any program under execution should reside in the main memory, as the CPU
cannot directly access the hard disk. The main memory usually starts at physical
address 0. Certain memory locations are reserved for special-purpose
programs, such as the operating system, which can sit either at the low addresses or
at the high end; however, it is usually at the low end. The rest of main memory is
divided into pieces where different programs reside. Nowadays most operating
systems provide a multiprogramming environment, i.e., more than one
program resides in main memory. Different processes are mapped to different
physical locations in the main memory.

14.4 MEMORY MANAGEMENT HARDWARE


In a multiprogramming environment, many programs reside in the memory. Hence,
it becomes an important function of the operating system to manage the cache and
virtual memory efficiently. The relationship between cache and main memory is
the same as the relationship between the virtual memory stored on disk and the main
memory. If data to be retrieved is not available at a particular level, then it is
required to search one level higher in the memory hierarchy to find the data, which is
significantly slower. If the data is not found at the particular level, it is called a
cache/virtual memory miss. Thus, if we want to access a block in cache and the
block is present there, we call it a hit, else a miss. Similarly, when we try to access
a page in main memory and the page is present there, we call it a page hit or else
page fault. Eventually in all systems, a cache system and a virtual memory system
run side-by-side. If the block is not found in cache, then it is retrieved from main
memory and stored in the cache. Then the memory operation can proceed.
Similarly, if the page is not found in the main memory, it is retrieved from disk and
written to the main memory and then the process proceeds. These misses or page
faults adversely impact memory performance via the following mechanisms:
Replacement cost: When a cache miss or virtual memory page fault occurs,
the processor or memory management unit must:
• Find the requested block or page in the lower-level store
• Check the modify bits that reflect whether the particular page or cache
data has been modified during the operation
• Write the block or page back to the cache or main memory
• Set the appropriate bit showing the recency of the replacement
These operations result in overhead which would not have occurred if the
memory operation had been successful, i.e., if the miss or page fault had not occurred.
It is crucial that page replacement causes the least possible number of misses or
page faults.
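A little arithmetic shows why. With assumed figures for the memory access time and the page-fault service time, even a tiny fault rate dominates the effective access time:

MEM_NS = 100            # assumed main-memory access time
FAULT_NS = 10_000_000   # assumed page-fault service time (10 ms)

for p in (0.0, 1e-6, 1e-4):            # assumed page-fault rates
    eat = (1 - p) * MEM_NS + p * FAULT_NS
    print(f"fault rate {p:g}: {eat:,.0f} ns effective access")
# 0 -> 100 ns; 1e-6 -> 110 ns; 1e-4 -> about 1,100 ns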
In order to make memory access efficient under the assumption that the system is
using cache or virtual memory (or both), we need to correctly design and implement
the buffering function in cache or virtual memory. This can be done by making the
computationally intensive parts of the buffering process as efficient as possible.
Thus, the main jobs of a memory management unit are:
• Converting a logical address to a physical address.
• Fetching the required page and providing a facility for dynamic storage
relocation that maps a logical memory reference to a physical memory reference
(in case of page fault).
• Making provision for sharing of common programs stored in memory by
different users so that memory can be managed efficiently by segmentation.
• Protecting data from unauthorized users.
• Protecting the operating system from users against any change.
Check Your Progress
1. Why is cache used?
2. Define virtual memory.
3. What is hit and miss?

14.5 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The cache is used for storing program segments currently being executed in
the CPU and for the data frequently used in the present calculations.
2. The virtual memory is a concept that permits the user to construct a program
with size more than the total memory space available to it. This technique
allows user to use the hard disk as if it is a part of main memory.
3. If we want to access a block in cache and the block is present there, we call
it a hit, else a miss.

14.6 SUMMARY
• The cache is a small, fast memory placed between the CPU and the main
memory. The system performance can improve dramatically by using cache
memory at a relatively lower cost.
• It is the role of the cache controller to determine whether the data desired by
the processor resides in the cache memory or it is to be obtained from the
main memory.
• The virtual memory is a concept that permits the user to construct a program
with size more than the total memory space available to it. This technique
allows the user to use the hard disk as if it were a part of main memory. Hence, with
this technique, a program with size even larger than the actual physical
memory available can execute.
• The objective of virtual memory is to have the maximum possible portion of a
program in the main memory and the remaining portion of the program reside
on the hard disk.
• When we try to access a page in main memory and the page is present
there, we call it a page hit or else page fault.

14.7 KEY WORDS

• Cache: A very high speed memory used to increase the speed of processing
by making the current programs and data available to the CPU.
• Virtual memory: A technique that allows the execution of processes that
may not be completely in the memory.

14.8 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. Define cache memory.
2. What is the function of cache controller?
3. What do you understand by virtual memory?
Long Answer Questions
1. Discuss the significance of cache memory.
2. Explain the process of address translation in virtual memory.
3. Explain the term memory management hardware.

14.9 FURTHER READINGS


Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-
Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi: Prentice-Hall.