315 11 - Digital Computer Organization
Master of Computer Applications
DIGITAL COMPUTER ORGANIZATION
I - Semester
ALAGAPPA UNIVERSITY
[Accredited with ‘A+’ Grade by NAAC (CGPA:3.64) in the Third Cycle
and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003
DIGITAL COMPUTER
ORGANIZATION
Reviewer
Authors
B Basavaraj, Former Principal and HOD, Department of Electronics and Communication Engineering, SJR College of Science,
Arts & Commerce
Units (1.0-1.3, 1.5-1.10, 2, 3.0-3.2, 4, 5)
Satish K Karna, Ex-Educational Consultant Karna Institute of Technology, Chandigarh
Units (1.4, 3.3-3.10, 6)
Deepti Mehrotra, Professor, Amity School of Engineering and Technology, Amity University, Noida
Units (7-14)
All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and are correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties or merchantability or fitness for any particular use.
Work Order No. AU/DDE/DE1-238/Preparation and Printing of Course Materials/2018 Dated 30.08.2018 Copies - 500
SYLLABI-BOOK MAPPING TABLE
Digital Computer Organization
The term ‘digital’ has become quite common in this age of constantly improving
technology. It is most commonly used in the fields of electronics and computing
wherein information is transformed into binary numeric form, as in digital
photography or digital audio. Digital systems use discontinuous or discrete values
for representation of information for processing, storage, transmission and input
whereas analog systems use continuous values for the representation of information.
Computer organization helps in optimizing performance-based products.
Software engineers need to know the processing ability of processors. They may
need to optimize software in order to gain the most performance at the least expense.
This can require quite detailed analysis of the computer organization. In a multimedia
decoder, for instance, the designers might need to arrange for most data to be processed in the
fastest data path; the various components are assumed to be in place, and the task
is to investigate the organizational structure to verify that the computer parts operate correctly.
Computer organization also helps in planning the selection of a processor for a
particular project. Sometimes certain tasks need additional components as well.
For example, a computer capable of virtualization needs virtual memory hardware
so that the memory of different simulated computers can be kept separated. The
computer organization and features also affect the power consumption and the
cost of the processor.
This book, Digital Computer Organization, follows the self-instruction
mode or the SIM format wherein each unit begins with an ‘Introduction’ to the
topic followed by an outline of the ‘Objectives’. The content is presented in a
simple and structured form interspersed with ‘Check Your Progress’ questions for
better understanding. At the end of each unit a list of ‘Key Words’ is provided
along with a ‘Summary’ and a set of ‘Self Assessment Questions and Exercises’
for effective recapitulation.
Self-Instructional
8 Material
BLOCK I
NUMBER SYSTEMS

UNIT 1 NUMBER SYSTEM
Structure
1.0 Introduction
1.1 Objectives
1.2 Number Systems
1.2.1 Decimal Number System
1.2.2 Binary Number System
1.2.3 Octal Number System
1.2.4 Hexadecimal Number System
1.2.5 Conversion from One Number System to the Other
1.3 Binary Arithmetic
1.3.1 Binary Addition
1.3.2 Binary Subtraction
1.3.3 Binary Multiplication
1.3.4 Binary Division
1.4 Complements
1.5 Numeric and Character Codes
1.6 Answers to Check Your Progress Questions
1.7 Summary
1.8 Key Words
1.9 Self Assessment Questions and Exercises
1.10 Further Readings
1.0 INTRODUCTION
In this unit, you will learn about number systems and binary codes. In mathematics,
a 'number system' is a set of numbers together with one or more operations, such
as addition or multiplication. The number systems are represented as natural
numbers, integers, rational numbers, algebraic numbers, real numbers, complex
numbers, etc. A number symbol is called a numeral. A numeral system or system
of numeration is a writing system for expressing numbers. For example, the standard
decimal representation of whole numbers gives every whole number a unique
representation as a finite sequence of digits. You will learn about the binary numeral
system or base-2 number system that represents numeric values using two symbols,
0 and 1. This base-2 system is specifically a positional notation with a radix of 2.
It is implemented in digital electronic circuitry using logic gates and the binary
system used by all modern computers. Since binary is a base-2 system, hence
each digit represents an increasing power of 2 with the rightmost digit representing
2^0, the next representing 2^1, then 2^2, and so on. To determine the decimal
representation of a binary number simply take the sum of the products of the
binary digits and the powers of 2 which they represent. You will also learn about
octal, decimal and hexadecimal numeral systems.
1.1 OBJECTIVES
A number is an idea that is used to refer amount of things. People use number
words, number gestures and number symbols. Number words are said out loud.
Number gestures are made with some part of the body, usually the hands. Number
symbols are marked or written down. A number symbol is called a numeral. The
number is the idea we think of when we see the numeral, or when we see or hear
the word.
On hearing the word number, we immediately think of the familiar decimal
number system with its 10 digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. These numerals
are called Arabic numerals. Our present number system provides modern
mathematicians and scientists with great advantages over those of previous
civilizations and is an important factor in our advancement. Since fingers are the
most convenient tools nature has provided, human beings use them in counting.
So, the decimal number system followed naturally from this usage.
A number system of base, or radix, r is a system that uses distinct symbols
for r digits. Numbers are represented by a string of digit symbols. To determine the
quantity that the number represents, it is necessary to multiply each digit by an
integer power of r and then form the sum of all the weighted digits. It is possible to
use any whole number greater than one as a base in building a numeration system.
The number of digits used is always equal to the base.
There are four systems of arithmetic which are often used in digital systems.
These systems are as follows:
1. Decimal
2. Binary
3. Octal
4. Hexadecimal
In any number system, there is an ordered set of symbols known as digits.
Collection of these digits makes a number which in general has two parts, integer
and fractional, set apart by a radix point (.). Hence, a number system can be
represented as,
(N)b = an–1 an–2 ... a1 a0 . a–1 a–2 ... a–m
       (Integer Portion)     (Fractional Portion)
where, N = A number.
b = Radix or base of the number system.
n = Number of digits in integer portion.
m = Number of digits in fractional portion.
an – 1 = Most Significant Digit (MSD).
a– m = Least Significant Digit (LSD).
and 0 ≤ (ai or a–f) ≤ b – 1
Base or Radix: The base or radix of a number is defined as the number of
different digits which can occur in each position in the number system.
The decimal number system has a base or radix of 10. Each of the ten
decimal digits 0 through 9, has a place value or weight depending on its position.
The weights are units, tens, hundreds, and so on. The same can be written as the
power of its base as 10^0, 10^1, 10^2, 10^3, ..., etc. Thus, the number 1993 represents a
quantity equal to 1000 + 900 + 90 + 3. Actually, this should be written as {1 ×
10^3 + 9 × 10^2 + 9 × 10^1 + 3 × 10^0}. Hence, 1993 is the sum of all digits multiplied
by their weights. Each position has a value 10 times greater than the position to its
right.
For example, the number 379 actually stands for the following representation.

    Weight:   10^2   10^1   10^0
             (100)   (10)    (1)
    Digit:      3      7      9

    [379]10 = 3 × 100 + 7 × 10 + 9 × 1
In this example, 9 is the Least Significant Digit (LSD) and 3 is the Most
Significant Digit (MSD).
Example 1.1: Write the number 1936.469 using decimal representation.
Solution: [1936.469]10 = 1 × 10^3 + 9 × 10^2 + 3 × 10^1 + 6 × 10^0 + 4 × 10^–1
+ 6 × 10^–2 + 9 × 10^–3
= 1000 + 900 + 30 + 6 + 0.4 + 0.06 + 0.009
It is seen that powers are numbered to the left of the decimal point starting
with 0 and to the right of the decimal point starting with –1.
The general rule for representing numbers in the decimal system by using
positional notation is as follows:
an an–1 ... a2 a1 a0 = an × 10^n + an–1 × 10^(n–1) + ... + a2 × 10^2 + a1 × 10^1 + a0 × 10^0
Where n is the number of digits to the left of the decimal point.
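The positional rule above can be sketched in Python as follows; the function name decimal_value is illustrative, not part of the text.

```python
def decimal_value(digits):
    """Evaluate a list of decimal digits using positional weights.

    digits[0] is the most significant digit, so a number with n digits
    contributes digits[i] * 10**(n - 1 - i) for each position i.
    """
    n = len(digits)
    return sum(d * 10 ** (n - 1 - i) for i, d in enumerate(digits))

print(decimal_value([1, 9, 9, 3]))  # 1*10^3 + 9*10^2 + 9*10^1 + 3*10^0 = 1993
print(decimal_value([3, 7, 9]))     # 3*100 + 7*10 + 9*1 = 379
```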
The numeral [10]2 (one, zero, base two) stands for two, the base of the
system.
In binary counting, single digits are used for none and one. Two-digit
numbers are used for [10]2 and [11]2 [2 and 3 in decimal numerals]. For the next
counting number, [100]2 (4 in decimal numerals) three digits are necessary. After
[111]2 (7 in decimal numerals) four-digit numerals are used until [1111]2 (15 in
decimal numerals) is reached, and so on. In a binary numeral, every position
has a value 2 times the value of the position to its right.
A binary number with 4 bits is called a nibble and a binary number with 8
bits is known as a byte.
For example, the number [1011]2 actually stands for the following
representation:
[1011]2 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0
= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1
[1011]2 = 8 + 0 + 2 + 1 = [11]10
In general,
[bn bn–1 ... b2 b1 b0]2 = bn × 2^n + bn–1 × 2^(n–1) + ... + b2 × 2^2 + b1 × 2^1 + b0 × 2^0
Similarly, the binary number 10101.011 can be written as follows:
    1      0      1      0      1    .   0      1      1
   2^4    2^3    2^2    2^1    2^0   .  2^–1   2^–2   2^–3
  (MSD)                                               (LSD)
[10101.011]2 = 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0
+ 0 × 2^–1 + 1 × 2^–2 + 1 × 2^–3
= 16 + 0 + 4 + 0 + 1 + 0 + 0.25 + 0.125 = [21.375]10
In each binary digit, the value increases in powers of two starting with 0 to
the left of the binary point and decreases to the right of the binary point starting
with power –1.
Decimal    Binary     Octal
   0      000 000       0
   1      000 001       1
   2      000 010       2
   3      000 011       3
   4      000 100       4
   5      000 101       5
   6      000 110       6
   7      000 111       7
   8      001 000      10
   9      001 001      11
  10      001 010      12
  11      001 011      13
  12      001 100      14
  13      001 101      15
  14      001 110      16
  15      001 111      17
  16      010 000      20
1.2.4 Hexadecimal Number System

Counting in Hexadecimal
When counting in hex, each digit can be incremented from 0 to F. Once it reaches
F, the next count causes it to recycle to 0 and the next higher digit is incremented.
This is illustrated in the following counting sequences: 0038, 0039, 003A, 003B,
NOTES
003C, 003D, 003E, 003F, 0040; 06B8, 06B9, 06BA, 06BB, 06BC, 06BD,
06BE, 06BF, 06C0, 06C1.
1.2.5 Conversion from One Number System to the Other
Binary to Decimal Conversion
A binary number can be converted into decimal number by multiplying the binary
1 or 0 by the weight corresponding to its position and adding all the values.
Example 1.2: Convert the binary number 110111 to decimal number.
Solution: [110111]2 = 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0
= 1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 1 × 1
= 32 + 16 + 0 + 4 + 2 + 1
= [55]10
We can streamline binary to decimal conversion by the following procedure:
Step 1: Write the binary, i.e., all its bits in a row.
Step 2: Write 1, 2, 4, 8, 16, 32, ..., directly under the binary number working
from right to left.
Step 3: Omit the decimal weight which lies under zero bits.
Step 4: Add the remaining weights to obtain the decimal equivalent.
The same method is used for binary fractional number.
Example 1.3: Convert the binary number 11101.1011 into its decimal
equivalent.
Solution:
Step 1: 1 1 1 0 1 . 1 0 1 1
Binary Point
Step 2: 16 8 4 2 1 . 0.5 0.25 0.125 0.0625
Step 3: 16 8 4 0 1 . 0.5 0 0.125 0.0625
Step 4: 16 + 8 + 4 + 1 + 0.5 + 0.125 + 0.0625 = [29.6875]10
Hence, [11101.1011]2 = [29.6875]10
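The four steps above can be sketched in Python; binary_to_decimal is an illustrative name, and the routine handles fractional bits the same way as Example 1.3.

```python
def binary_to_decimal(bits: str) -> float:
    """Sum the weights under each 1 bit, as in Steps 1-4 above."""
    if "." in bits:
        whole, frac = bits.split(".")
    else:
        whole, frac = bits, ""
    value = 0.0
    # Integer part: weights 1, 2, 4, 8, 16, ... from right to left.
    for i, bit in enumerate(reversed(whole)):
        if bit == "1":
            value += 2 ** i
    # Fractional part: weights 0.5, 0.25, 0.125, ... from left to right.
    for i, bit in enumerate(frac, start=1):
        if bit == "1":
            value += 2 ** -i
    return value

print(binary_to_decimal("110111"))     # 55.0, matching Example 1.2
print(binary_to_decimal("11101.1011")) # 29.6875, matching Example 1.3
```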
Table 1.3 lists the binary numbers from 0000 to 10000. Table 1.4 lists
powers of 2 and their decimal equivalents and the number of K. The abbreviation
K stands for 2^10 = 1024. Therefore, 1K = 1024, 2K = 2048, 3K = 3072, 4K =
4096, and so on. Many personal computers have 64K memory; this means that these
computers can store up to 65,536 bytes in the memory section.
Table 1.3 Binary Numbers    Table 1.4 Powers of 2
52 ÷ 8 = 6 + remainder 4
6 ÷ 8 = 0 + remainder 6 (MSD)
Fractional part 0.12 × 8 = 0.96 = 0.96 with a carry of 0
0.96 × 8 = 7.68 = 0.68 with a carry of 7
0.68 × 8 = 5.44 = 0.44 with a carry of 5
0.44 × 8 = 3.52 = 0.52 with a carry of 3
0.52 × 8 = 4.16 = 0.16 with a carry of 4
0.16 × 8 = 1.28 = 0.28 with a carry of 1
0.28 × 8 = 2.24 = 0.24 with a carry of 2
0.24 × 8 = 1.92 = 0.92 with a carry of 1
[416.12]10 = [640.07534121]8
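The repeated division and multiplication procedure illustrated above can be sketched in Python for any base up to 16; the name to_base and the default of 8 fractional digits are our assumptions, and the fractional digits are subject to floating-point precision.

```python
def to_base(value: float, base: int, frac_digits: int = 8) -> str:
    """Convert a non-negative decimal number to the given base using
    repeated division (integer part) and repeated multiplication (fraction)."""
    alphabet = "0123456789ABCDEF"
    whole = int(value)
    frac = value - whole
    # Integer part: divide by the base, collecting remainders bottom-up.
    int_part = ""
    while whole:
        whole, r = divmod(whole, base)
        int_part = alphabet[r] + int_part
    int_part = int_part or "0"
    # Fractional part: multiply by the base, collecting carries top-down.
    frac_part = ""
    for _ in range(frac_digits):
        frac *= base
        carry = int(frac)
        frac -= carry
        frac_part += alphabet[carry]
    return int_part + ("." + frac_part if frac_digits else "")

print(to_base(416.12, 8))    # 640.07534121, matching the worked example above
print(to_base(854, 16, 0))   # 356
```

The same routine reproduces the decimal-to-hexadecimal conversions later in this unit by passing base 16.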
Example 1.13: Convert [3964.63]10 to octal number.
Solution: Integer part 3964 ÷ 8 = 495 with a remainder of 4 (LSD)
Hexadecimal to Decimal Conversion
As in octal, each hexadecimal digit is multiplied by the power of 16 that
represents the weight according to its position, and finally all the values are added.
Another way of converting a hexadecimal number into its decimal equivalent
is to first convert the hexadecimal number to binary and then convert from binary
to decimal.
Example 1.22: Convert [B6A]16 to decimal number.
Solution: Hexadecimal number [B6A]16
[B6A]16 = B × 16^2 + 6 × 16^1 + A × 16^0
= 11 × 256 + 6 × 16 + 10 × 1 = 2816 + 96 + 10 = [2922]10
Example 1.23: Convert [2AB.8]16 to decimal number.
Solution: Hexadecimal number,
[2AB.8]16 = 2 × 16^2 + A × 16^1 + B × 16^0 + 8 × 16^–1
= 2 × 256 + 10 × 16 + 11 × 1 + 8 × 0.0625
[2AB.8]16 = [683.5]10
Example 1.24: Convert [A85]16 to decimal number.
Solution: Converting the given hexadecimal number into binary, we have
A 8 5
[A85]16 = 1010 1000 0101
[1010 1000 0101]2 = 2^11 + 2^9 + 2^7 + 2^2 + 2^0 = 2048 + 512 + 128 + 4 + 1
[A85]16 = [2693]10
Example 1.25: Convert [269]16 to decimal number.
Solution: Hexadecimal number,
            2    6    9
[269]16 = 0010 0110 1001
[0010 0110 1001]2 = 2^9 + 2^6 + 2^5 + 2^3 + 2^0 = 512 + 64 + 32 + 8 + 1
[269]16 = [617]10
or, [269]16 = 2 × 16^2 + 6 × 16^1 + 9 × 16^0 = 512 + 96 + 9 = [617]10
Example 1.26: Convert [AF.2F]16 to decimal number.
Solution: Hexadecimal number,
[AF.2F]16 = A × 16^1 + F × 16^0 + 2 × 16^–1 + F × 16^–2
= 10 × 16 + 15 × 1 + 2 × 16^–1 + 15 × 16^–2
= 160 + 15 + 0.125 + 0.0586
[AF.2F]16 = [175.1836]10
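The weight-of-16 method used in these examples can be sketched in Python; hex_to_decimal is an illustrative name.

```python
def hex_to_decimal(hex_str: str) -> float:
    """Multiply each hex digit by its power-of-16 weight and add."""
    if "." in hex_str:
        whole, frac = hex_str.split(".")
    else:
        whole, frac = hex_str, ""
    value = float(int(whole, 16))  # integer part via positional weights
    for i, digit in enumerate(frac, start=1):
        value += int(digit, 16) * 16 ** -i  # negative weights after the point
    return value

print(hex_to_decimal("B6A"))    # 2922.0, matching Example 1.22
print(hex_to_decimal("2AB.8"))  # 683.5, matching Example 1.23
```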
Decimal to Hexadecimal Conversion
One way to convert from decimal to hexadecimal is the hex dabble method. The
conversion is done in a similar fashion, as in the case of binary and octal, taking the
factor for division and multiplication as 16.
Any decimal integer number can be converted to hex by successively dividing
by 16 until zero is obtained in the quotient. The remainders can then be written
from bottom to top to obtain the hexadecimal results.
The fractional part of the decimal number is converted to hexadecimal number
by multiplying it by 16, and writing down the carry and the fraction separately.
This process is continued until the fraction is reduced to zero or the required
number of significant bits is obtained.
Example 1.27: Convert [854]10 to hexadecimal number.
Solution: 854 ÷ 16 = 53 with a remainder of 6
53 ÷ 16 = 3 with a remainder of 5
3 ÷ 16 = 0 with a remainder of 3
[854]10 = [356]16
Example 1.28: Convert [106.0664]10 to hexadecimal number.
Solution: Integer part
106 ÷ 16 = 6 with a remainder of 10
6 ÷ 16 = 0 with a remainder of 6
Fractional part
0.0664 × 16 = 1.0624 = 0.0624 with a carry of 1
0.0624 × 16 = 0.9984 = 0.9984 with a carry of 0
0.9984 × 16 = 15.9744 = 0.9744 with a carry of 15
0.9744 × 16 = 15.5904 = 0.5904 with a carry of 15
Fractional part [0.0664]10 = [0.10FF]16
Thus, the answer is [106.0664]10 = [6A.10FF]16
Example 1.29: Convert [65,535]10 to hexadecimal and binary equivalents.
Solution: (i) Conversion of decimal to hexadecimal number
65,535 ÷ 16 = 4095 with a remainder of F
4095 ÷ 16 = 255 with a remainder of F
255 ÷ 16 = 15 with a remainder of F
15 ÷ 16 = 0 with a remainder of F
[65535]10 = [FFFF]16
(ii) Conversion of hexadecimal to binary number
F F F F
1111 1111 1111 1111
[65535]10 = [FFFF]16 = [1111 1111 1111 1111]2
A typical microcomputer can store up to 65,535 bytes. The decimal
addresses of these bytes are from 0 to 65,535. The equivalent binary addresses
are from
0000 0000 0000 0000 to 1111 1111 1111 1111
The first 8 bits are called the upper byte and the second 8 bits are called the
lower byte.
When the decimal is greater than 255, we have to use both the upper byte
and the lower byte.
Hexadecimal to Octal Conversion
This can be accomplished by first writing down the 4-bit binary equivalent of
hexadecimal digit and then partitioning it into groups of 3 bits each. Finally, the 3-
bit octal equivalent is written down.
Example 1.30: Convert [2AB.9]16 to octal number.
Solution: Hexadecimal number 2 A B . 9
4 bit numbers 0010 1010 1011 . 1001
3 bit pattern 001 010 101 011 . 100 100
Octal number 1 2 5 3 . 4 4
[2AB.9]16 = [1253.44]8
Example 1.31: Convert [3FC.82]16 to octal number.
Solution: Hexadecimal number 3 F C . 8 2
4 bit binary numbers 0011 1111 1100 . 1000 0010
3 bit pattern 001 111 111 100 . 100 000 100
Octal number 1 7 7 4 . 4 0 4
[3FC.82]16 = [1774.404]8
Notice that zeros are added to the rightmost bits in the above two examples
to make complete groups of 3 bits.
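Both conversions can be sketched in Python by going through binary, exactly as in the examples above; hex_to_octal is an illustrative name.

```python
def hex_to_octal(hex_str: str) -> str:
    """Hex digits -> 4-bit groups -> regroup into 3-bit groups -> octal digits."""
    whole, _, frac = hex_str.partition(".")
    int_bits = "".join(format(int(d, 16), "04b") for d in whole)
    frac_bits = "".join(format(int(d, 16), "04b") for d in frac)
    # Pad so each side splits evenly into 3-bit groups: zeros on the left of
    # the integer part, zeros on the rightmost bits of the fractional part.
    int_bits = int_bits.zfill(-(-len(int_bits) // 3) * 3)
    frac_bits = frac_bits.ljust(-(-len(frac_bits) // 3) * 3, "0")
    octal_int = "".join(str(int(int_bits[i:i + 3], 2))
                        for i in range(0, len(int_bits), 3))
    result = octal_int
    if frac:
        result += "." + "".join(str(int(frac_bits[i:i + 3], 2))
                                for i in range(0, len(frac_bits), 3))
    return result

print(hex_to_octal("2AB.9"))   # 1253.44, matching Example 1.30
print(hex_to_octal("3FC.82"))  # 1774.404, matching Example 1.31
```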
Octal to Hexadecimal Conversion
It is the reverse of the above procedure. First the 3-bit equivalent of the octal digit
is written down and partitioned into groups of 4 bits, then the hexadecimal equivalent
of that group is written down.
Example 1.32: Convert [16.2]8 to hexadecimal number.
Binary Fractions
A binary fraction can be represented by a series of 1 and 0 to the right of a binary
point. The weights of digit positions to the right of the binary point are given by
2–1, 2–2, 2–3 and so on.
For example, the binary fraction 0.1011 can be written as,
0.1011 = 1 × 2^–1 + 0 × 2^–2 + 1 × 2^–3 + 1 × 2^–4
= 1 × 0.5 + 0 × 0.25 + 1 × 0.125 + 1 × 0.0625
(0.1011)2 = (0.6875)10
Mixed Numbers
Mixed numbers contain both integer and fractional parts. The weights of mixed
numbers are,
2^3 2^2 2^1 2^0 . 2^–1 2^–2 2^–3 etc.
Binary Point
For example, a mixed binary number 1011.101 can be written as,
(1011.101)2 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 + 1 × 2^–1 + 0 × 2^–2 + 1 × 2^–3
= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 + 1 × 0.5 + 0 × 0.25 + 1 × 0.125
[1011.101]2 = [11.625]10
When different number systems are used, it is customary to enclose the
number within square brackets, with a subscript indicating the type of the number
system.
Check Your Progress
1. What is the base or radix of a number?
2. What is decimal number system?
3. What is octal number system?
4. Why is the double dabble method used?
Table 1.6 Binary Addition

     Augend + Addend    Carry    Sum    Result
1.      0   +   0         0       0        0
2.      0   +   1         0       1        1
3.      1   +   0         0       1        1
4.      1   +   1         1       0       10
Example 1.34: Add the binary numbers (i) 011 and 101, (ii) 1011 and 1110,
(iii) 10.001 and 11.110, (iv) 1111 and 10010, and (v) 11.01 and 101.0111.
Solution: (i) Binary number Equivalent decimal number
11 Carry
011 3
+ 101 5
Sum = 1000 8
Since the adder circuit in digital systems can handle only two numbers at a
time, it is not necessary to consider the addition of more than two binary numbers.
When more than two numbers are to be added, the first two are added together
and then their sum is added to the third number, and so on. Almost all modern
digital machines can perform the addition operation in less than 1 μs.
The logic equation representing the sum is also known as the exclusive OR function
and can also be represented in Boolean ring algebra as S = AB′ + A′B = A ⊕ B.
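The two-at-a-time addition described above can be sketched in Python; add_binary is an illustrative name.

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary integers column by column, propagating carries.

    The sum bit of each column is the exclusive OR relation noted above,
    applied to the two input bits and the carry-in.
    """
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal width
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        out.append(str(total % 2))   # sum bit of this column
        carry = total // 2           # carry into the next column
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("011", "101"))    # 1000  (3 + 5 = 8, as in Example 1.34)
print(add_binary("1011", "1110"))  # 11001 (11 + 14 = 25)
```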
1. 0 × 0 = 0
2. 0 × 1 = 0
3. 1 × 0 = 0
4. 1 × 1 = 1
In a computer, the multiplication operation is performed by repeated additions,
in much the same manner as the addition of all partial products to obtain the full
product. Since the multiplier digits are either 0 or 1, we always multiply by 0 or 1
and no other digit.
Example 1.37: Multiply the binary numbers 1011 and 1101.
Solution:       1011      Multiplicand = (11)10
              × 1101      Multiplier   = (13)10
                1011
               0000       Partial products
              1011
             1011
           10001111       Final product = (143)10
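The shift-and-add view of binary multiplication used in the example can be sketched in Python; multiply_binary is an illustrative name.

```python
def multiply_binary(multiplicand: str, multiplier: str) -> str:
    """Form one shifted partial product per multiplier bit (0 or 1 only)
    and accumulate them with repeated additions."""
    m = int(multiplicand, 2)
    product = 0
    for i, bit in enumerate(reversed(multiplier)):
        if bit == "1":
            product += m << i   # partial product shifted into position i
    return format(product, "b")

print(multiply_binary("1011", "1101"))  # 10001111 (11 x 13 = 143)
```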
1.4 COMPLEMENTS
Binary:       1's complement, 2's complement
Octal:        7's complement, 8's complement
Decimal:      9's complement, 10's complement
Hexadecimal:  15's complement, 16's complement
1. 1's complement of binary number (01101)2 is
   10010        (changing 1's to 0's and 0's to 1's)
2. 2's complement of binary number (10100)2 is
   1's    01011
   +          1
   2's    01100
3. The 1’s complement of (10010110)2 is (01101001)2.
4. The 2’s complement of (10010110)2 is (01101010)2.
5. 2’s complement of binary number (11001.11)2 is
1's 00110.00
1
2's 00110.01
Example 1.40
1. 9’s complement of decimal number (567)10 is
     999
   – 567
   = (432)10
2. 10’s complement of (5370)10 is
     9999
   – 5370
   = 4629
   +    1
   = 4630
3. The 9’s complement of (2496)10 would be (7503)10.
4. The 10’s complement of (2496)10 is (7504)10.
Hexadecimal Number in Complement Form
The 15’s and 16’s complements are defined with respect to the hexadecimal number
system. The 15’s complement is obtained by subtracting each hex digit from 15.
The 16’s complement is obtained by adding ‘1’ to the 15’s complement.
Example 1.41
1. 15’s complement of hexadecimal number (2789)16 is
     FFFF
   – 2789
   = (D876)16
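The (r – 1)'s and r's complements for any of the four bases can be sketched in Python; both function names are illustrative.

```python
def diminished_radix_complement(digits: str, base: int) -> str:
    """(r-1)'s complement: subtract each digit from base - 1.

    This gives the 1's, 7's, 9's and 15's complements for bases 2, 8, 10, 16.
    """
    alphabet = "0123456789ABCDEF"
    return "".join(alphabet[base - 1 - alphabet.index(d)] for d in digits)

def radix_complement(digits: str, base: int) -> str:
    """r's complement: add 1 to the (r-1)'s complement."""
    n = int(diminished_radix_complement(digits, base), base) + 1
    # Re-encode the result in the given base, padded to the original width.
    alphabet = "0123456789ABCDEF"
    out = ""
    while n:
        n, r = divmod(n, base)
        out = alphabet[r] + out
    return out.zfill(len(digits))

print(diminished_radix_complement("2789", 16))  # D876, matching Example 1.41
print(radix_complement("5370", 10))             # 4630, matching Example 1.40
```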
Representing numbers within the computer circuits, registers and the memory unit
by means of Electrical signals or Magnetism is called NUMERIC CODING. In
the computer system, the numbers are stored in the Binary form, since any number
can be represented by the use of 1’s and 0’s only. Numeric codes are divided into
two categories, i.e., weighted codes and non-weighted codes. The different types of
Weighted Codes are:
(i) BCD Code,
(ii) 2-4-2-1 Code,
(iii) 4-2-2-1 Code,
(iv) 5-2-1-1 Code,
(v) 7-4-2-1 Code, and
(vi) 8-4-2-1 Code.
The Non-Weighted Codes are of two types, i.e.,
(i) Non-Error Detecting Codes, and
(ii) Error Detecting Codes.
Character codes
Alphanumeric codes are also called character codes. These are binary codes
which are used to represent alphanumeric data. The codes write alphanumeric
data including letters of the alphabet, numbers, mathematical symbols and
punctuation marks in a form that is understandable and processable by a computer.
All these codes are discussed in detail in unit 6.
1.7 SUMMARY
Decimal number system: The number system that utilizes ten distinct
digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. It is a base 10 system.
Binary number system: A number system that uses only two digits, 0 and
1, known as bits or binary digits. It is a base 2 system.
Nibble: A binary number with 4 bits.
Octal number system: A number system that uses eight digits, 0, 1, 2, 3,
4, 5, 6 and 7. It has base 8.
2421 code: This is a weighted code and its weights are 2, 4, 2 and 1.
1.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES
1.10 FURTHER READINGS
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
UNIT 2 BOOLEAN ALGEBRA AND COMBINATIONAL CIRCUITS
Structure
2.0 Introduction
2.1 Objectives
2.2 Logic Gates and Inverter
2.2.1 AND Gate
2.2.2 OR Gate
2.2.3 NAND Gate
2.2.4 NOR Gate
2.2.5 Exclusive OR (XOR) Gates
2.2.6 Exclusive NOR Gates
2.3 Boolean Algebra and Logic Simplification
2.3.1 Laws and Rules of Boolean Algebra
2.3.2 De-Morgan’s Theorems
2.3.3 Simplification of Logic Expressions using Boolean Algebra
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
2.0 INTRODUCTION
In this unit, you will learn about the logic gates and Boolean algebra. A logic gate
is an electronic circuit that makes logic decisions. Logic gates have only one output
and two or more inputs - except for the NOT gate, which has only one input. The
output signal appears only for certain combinations of input signals. Gates do the
manipulation of binary information. To make logic decisions, three basic logic circuits
(called gates) are used: the OR circuit, the AND circuit and the NOT circuit.
Logic gates are building blocks, which are available in the form of various IC
families. Gates are blocks of hardware that produce signals of binary 1 or 0 when
the logic input requirements are satisfied. Each gate has a distinct graphic symbol
and its operation can be described by means of an algebraic function. Logic gates
provide a simple and straight-forward method of minimizing Boolean expressions.
2.1 OBJECTIVES
Table 2.1 2-Input AND Gate

Inputs      Output
A  B        Y
0  0        0
0  1        0
1  0        0
1  1        1

Table 2.2 3-Input AND Gate

Inputs        Output
A  B  C       Y
0  0  0       0
0  0  1       0
0  1  0       0
0  1  1       0
1  0  0       0
1  0  1       0
1  1  0       0
1  1  1       1
2.2.2 OR Gate
The OR gate is a digital logic gate that implements logical disjunction. A basic circuit
has two or more inputs and a single output and it operates in accordance with the
following definition: The output of an OR gate assumes state 1 if one or more (or all)
inputs assume state 1.
From the truth table it can be seen that all switches must be opened (0 state)
for the light to be off (output 0 state). This type of circuit is called an OR gate.
Table 2.3 shows the truth table of a three-input OR gate.
Table 2.3 Truth Table of Three-Input OR Gates
Inputs Outputs
A B C Y=A+B+C
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 1
Table 2.4 is the truth table for the two-input OR gate. The OR gate is an ANY
OR ALL gate; an output occurs when any or all of the inputs are high. Table 2.5
shows binary equivalent details in which A and B are determined for inputs and Y
= A+B is expressed for output.
NOTES
Table 2.4 Two-Input OR Gate Table 2.5 Binary Equivalent
A B Y=A+B A B Y=A+B
Low Low Low 0 0 0
Low High High 0 1 1
High Low High 1 0 1
High High High 1 1 1
[Logic symbol: the AND symbol with a small circle on the output, Y = (AB)′]

Inputs      NAND Operation    AND Operation
A  B        Y = (AB)′         Y = AB
0  0             1                 0
0  1             1                 0
1  0             1                 0
1  1             0                 1
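The truth tables of the basic two-input gates can also be generated programmatically. The following Python sketch (the names GATES and truth_table are ours) reproduces the NAND column above alongside AND, OR and NOR.

```python
from itertools import product

# Basic two-input gates expressed as Python lambdas on 0/1 values;
# NAND and NOR are the inverted forms of AND and OR.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

def truth_table(gate: str):
    """Return [(A, B, Y), ...] for all four input combinations."""
    fn = GATES[gate]
    return [(a, b, fn(a, b)) for a, b in product((0, 1), repeat=2)]

for row in truth_table("NAND"):
    print(*row)  # 0 0 1 / 0 1 1 / 1 0 1 / 1 1 0
```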
The truth table also explains that if one of the inputs is at logic 0, whatever be the
other input [(a) 0 and 0 and (b) 0 and 1], the NAND gate is disabled, i.e., closed
as in AND gate and it does not allow the input signal to pass through, and therefore,
the output remains at logic 1 as shown in Figure 2.4(a). In the case of AND gate,
the output remains at logic 0 for the above condition. On the other hand, when one
of the inputs is at logic 1, whatever be the other input [(c) 1 and 0 and (d) 1 and 1],
the NAND gate is enabled, i.e., opened as in AND gate and it allows the input
signal to pass through. At the output, we get the complement of the input signal,
shown in Figure 2.4(b).
Fig. 2.4
The NOR operation is also called the Peirce arrow operation and is
symbolically represented by,
Y = (A + B)′ = A′B′
The NOR gate first performs the OR operation on the inputs, and then
performs the NOT operation on the OR sum. Read the expression as ‘Y equals
NOT A OR B’ or ‘Y equals the complement of A OR B’.
Logic Symbol: The schematic symbol of the NOR gate is the OR symbol
with a small circle on the output. The small circle represents the operation of
inversion. It is shown in Figure 2.5.
Truth Table: Table 2.7 (Truth table) shows that the NOR output in each
case is the inverse of the OR output. The same operation can be extended to
NOR gates with more than two inputs.
Table 2.7 NOR Gate

      Inputs    NOR Output       OR Output
      A  B      Y = (A + B)′     A + B
(a)   0  0          1              0
(b)   0  1          0              1
(c)   1  0          0              1
(d)   1  1          0              1
[Figure 2.6: (a) XOR gate built from basic gates, Y = AB′ + A′B; (b) logic symbol, Y = A ⊕ B]
The logic symbol for the XOR gate is shown in Figure 2.6(b) and the truth
table for the XOR operation is given in Table 2.8.
Table 2.8 Truth Table for XOR Operation
Inputs    Output
A  B      Y = A ⊕ B
0 0 0
0 1 1
1 0 1
1 1 0
The truth table of the XOR gate shows the output is HIGH when any, but
not all, of the inputs are at 1. This exclusive feature distinguishes it from the OR
gate. The XOR gate responds with a HIGH output only when an odd number of
inputs is HIGH. When there is an even number of HIGH inputs, such as two or
four, the output will always be LOW.
Note the unique XOR symbol with a circle around the + symbol of OR.
Read the expression as ‘Y equals A exclusively ORed with B’.
[Figure: XOR gate implemented as Y = (A + B)(AB)′]
which in the statement form is read as ‘if A = 1 OR B = 1, but NOT both
simultaneously, then Y = 1’. This function is implemented in logic form as shown in
Figure 2.7.
[Figure 2.7: XOR implementations using basic gates, Y = A'B + AB']
It should be noted that the same truth table applies when adding two binary
digits (bits). A 2-input XOR circuit is, therefore, sometimes called a modulo-2
adder or a half-adder. The name half-adder refers to the fact that a possible
carry-bit, resulting from an addition of two preceding bits, has not been taken into
account. A full addition is performed by a second XOR circuit with the output
signal of the first circuit and the carry as input signals.
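The half-adder/full-adder relationship described above can be sketched as follows (an illustrative sketch, not circuitry from the text; function names are ours):

```python
def half_adder(a: int, b: int):
    """Sum is the XOR of the two bits; carry is their AND."""
    return a ^ b, a & b

def full_adder(a: int, b: int, cin: int):
    """Two cascaded half adders: the second XOR takes the first sum and the carry-in."""
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

assert half_adder(1, 1) == (0, 1)       # 1 + 1 = 10 (binary)
assert full_adder(1, 1, 1) == (1, 1)    # 1 + 1 + 1 = 11 (binary)
```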
[Figure: two cascaded XOR gates — the first forms A ⊕ B, the second combines it with C]
Fig. 2.9 Cascading of Two XOR Circuits
Y = A ⊕ B ⊕ C = (A'B + AB')C' + (A'B + AB')'C
[Figure: XOR gate with one input used as a control signal]
Fig. 2.10 Control Input and Logic Variable Input
2.2.6 Exclusive NOR Gates
The exclusive NOR circuit, abbreviated XNOR, is the last of the seven basic logic
gates. The XNOR gate is an XOR gate followed by an inverter. The XNOR output is LOW when
the inputs have an odd number of 1s. The graphic symbol of the XNOR gate is shown
in Figure 2.11.
[Figure 2.11: XNOR gate symbol — an XOR symbol with a small inversion circle at the output, Y = (A ⊕ B)']
The truth table is given in Table 2.10. Note, in the output column, that Y is
the complement of the output of the XOR gate. The Boolean expression for the XNOR
gate is,
Y = A ⊙ B = A'B' + AB
Read the expression as ‘Y equals A exclusively NORed with B’.
According to De Morgan’s theorem, (A'B + AB')' = (A'B)'.(AB')'
(A'B)'.(AB')' = (A + B').(A' + B) = AB + A'B'
The 2-input XNOR gate is immensely useful for bit comparison and it
recognizes when the two inputs are identical. Hence, this gate is also called the
comparator or the coincidence circuit. XNOR gate is also used as an even parity
generator.
Table 2.10 Truth Table for XNOR Gate
Inputs Output
A B Y = A ⊙ B
0 0 1
0 1 0
1 0 0
1 1 1
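The comparator role of the XNOR gate can be sketched as follows (Python for illustration; the helper names are ours, not from the text):

```python
def xnor(a: int, b: int) -> int:
    """XNOR (coincidence): HIGH exactly when the two inputs are identical."""
    return 1 - (a ^ b)

def words_equal(x, y):
    """Bit comparison: XNOR every bit pair; all-HIGH means the words are equal."""
    return all(xnor(a, b) for a, b in zip(x, y))

assert xnor(0, 0) == 1 and xnor(1, 1) == 1
assert words_equal([1, 0, 1], [1, 0, 1])
assert not words_equal([1, 0, 1], [1, 1, 1])
```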
2.3 SIMPLIFICATION
Boolean algebra or Boolean logic was developed by the English mathematician George
Boole. It is considered as a logical calculus of truth values and resembles the algebra
of real numbers, with the numeric operations of multiplication xy, addition
x + y, and negation ¬x substituted by the respective logical operations of conjunction
x ∧ y, disjunction x ∨ y and complement ¬x. This set of rules describes specific
propositions whose result is either true (1) or false (0). In digital logic, these
rules are used to define digital circuits whose state can be either 1 or 0.
Boolean logic forms the basis for computation in contemporary binary computer
systems. Using Boolean equations, any algorithm or any electronic computer circuit
can be represented. Any Boolean expression can be transformed into an
equivalent expression by applying the theorems of Boolean algebra. This helps in
converting a given expression to a canonical or standardized form and minimizing
the number of terms in an expression. By minimizing terms and expressions, the
designer can use fewer electronic components while creating electrical circuits,
so that the cost of the system can be reduced. Boolean logical operations are performed
to simplify a Boolean expression using the following basic and derived operations.
Basic Operations: Boolean algebra is specifically based on logical
counterparts to the numeric operations multiplication xy, addition x + y and negation ¬x,
namely conjunction x ∧ y (AND), disjunction x ∨ y (OR) and complement or negation
¬x (NOT). In digital electronics, AND is represented as a multiplication, OR
is represented as an addition and NOT is denoted with a postfix prime, for
example A′, which means NOT A. Of the three, conjunction behaves the most like its numeric counterpart.
As a logical operation the conjunction of two propositions is true when both
propositions are true and false otherwise. Disjunction works almost like addition
with one exception, i.e., the disjunction of 1 and 1 is neither 2 nor 0 but 1. Hence, the
disjunction of two propositions is false when both propositions are false and true
otherwise. Disjunction is also termed the dual of conjunction. Logical negation,
however, does not work like numerical negation. It corresponds to incrementation,
i.e., ¬x = x + 1 mod 2. An operation that, like negation, is its own inverse (¬¬x = x) is termed an involution. Using
negation we can formalize the notion that conjunction is dual to disjunction, as per
De Morgan’s laws: ¬(x ∧ y) = ¬x ∨ ¬y and ¬(x ∨ y) = ¬x ∧ ¬y. These can also be
construed as definitions of conjunction in terms of disjunction and vice versa: x ∧ y
= ¬(¬x ∨ ¬y) and x ∨ y = ¬(¬x ∧ ¬y).
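Both De Morgan laws, and the cross-definitions above, can be verified exhaustively over {0, 1} (an illustrative sketch; the lambda names are ours):

```python
from itertools import product

NOT = lambda x: 1 - x
AND = lambda x, y: x & y
OR  = lambda x, y: x | y

for x, y in product((0, 1), repeat=2):
    assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))   # ¬(x ∧ y) = ¬x ∨ ¬y
    assert NOT(OR(x, y))  == AND(NOT(x), NOT(y))  # ¬(x ∨ y) = ¬x ∧ ¬y
    # Conjunction defined from disjunction, and vice versa:
    assert AND(x, y) == NOT(OR(NOT(x), NOT(y)))
    assert OR(x, y)  == NOT(AND(NOT(x), NOT(y)))
```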
Derived operations: Other Boolean operations can be derived from these
by composition. For example, implication x→y is a binary operation which is
false when x is true and y is false, and true otherwise. It can also be expressed as
x→y = ¬x ∨ y or, equivalently, ¬(x ∧ ¬y). In Boolean logic this operation is termed
material implication, which distinguishes it from related but non-Boolean logical
concepts. The basic concept is that an implication x→y is by default true.
Boolean algebra, however, does have an exact counterpart of addition modulo 2, called eXclusive-
OR (XOR) or parity, represented as x ⊕ y. The XOR of two propositions is true only
when exactly one of the propositions is true. Further, the XOR of any value with
itself vanishes: x ⊕ x = 0. Its digital electronics symbol is a hybrid of the
disjunction symbol and the equality symbol. XOR is the only binary Boolean operation
that is commutative and whose truth table has equally many 0s and 1s.
Another example is x|y, the NAND gate in digital electronics, which is false
when both arguments are true and true otherwise. NAND can be defined by
composition of negation with conjunction, because x|y = ¬(x ∧ y). It does not have
its own schematic symbol and is represented using an AND gate with an inverted
output. Unlike conjunction and disjunction, NAND is a binary operation that can be
used to obtain negation, using the notation ¬x = x|x. Using negation one can define
conjunction in terms of NAND through x ∧ y = ¬(x|y), from which all other Boolean
operations of nonzero parity can be obtained. NOR, ¬(x ∨ y), is the
evident dual of NAND and is equally used for this purpose. This universal character
of NAND and NOR has been widely used for gate arrays and also for integrated
circuits with multiple general-purpose gates.
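The universality of NAND described above can be demonstrated by building NOT, AND and OR from NAND alone (illustrative sketch; function names ours):

```python
def nand(x: int, y: int) -> int:
    return 1 - (x & y)

def not_(x):
    return nand(x, x)                         # ¬x = x | x

def and_(x, y):
    return nand(nand(x, y), nand(x, y))       # x ∧ y = ¬(x | y)

def or_(x, y):
    return nand(not_(x), not_(y))             # x ∨ y = ¬(¬x ∧ ¬y), by De Morgan

for x in (0, 1):
    assert not_(x) == 1 - x
    for y in (0, 1):
        assert and_(x, y) == (x & y)
        assert or_(x, y) == (x | y)
```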
In logical circuits, a simple adder can be made using an XOR gate to add the
numbers and a series of AND, OR and NOT gates to create the carry output. XOR
is also used for detecting an overflow in the result of a signed binary arithmetic
operation, which occurs when the leftmost retained bit of the result is not the same
as the infinite number of digits to the left.
2.3.1 Laws and Rules of Boolean Algebra
Boolean algebra is a system of mathematical logic. Properties of ordinary algebra
are valid for Boolean algebra. In Boolean algebra, every number is either 0 or 1.
There are no negative or fractional numbers. Though many of these laws have
already been discussed, they provide the tools necessary for simplifying Boolean expressions.
The following are the basic laws of Boolean algebra:
Laws of Complementation
The term complement means to invert, to change 1s to 0s and 0s to 1s. The
following are the laws of complementation:
Law 1: 0' = 1
Law 2: 1' = 0
Law 3: (A')' = A
OR Laws                      AND Laws
Law 4: 0 + 0 = 0             Law 12: 0.0 = 0
Law 5: 0 + 1 = 1             Law 13: 1.0 = 0
Law 6: 1 + 0 = 1             Law 14: 0.1 = 0
Law 7: 1 + 1 = 1             Law 15: 1.1 = 1
Law 8: A + 0 = A             Law 16: A.0 = 0
Law 9: A + 1 = 1             Law 17: A.1 = A
Law 10: A + A = A            Law 18: A.A = A
Law 11: A + A' = 1           Law 19: A.A' = 0
Laws of ordinary algebra that are also valid for Boolean algebra are:
Commutative Laws
Law 20 A+B=B+A
Law 21: A.B = B.A
Associative Laws
Law 22: A + (B + C) = (A + B) + C
Law 23: A.(BC) = (AB).C
Distributive Laws
Law 24: A.(B + C) = A.B + A.C
Law 25: A + BC = (A + B).(A + C)
Law 26: A + (A'.B) = A + B
2.3.2 De-Morgan’s Theorems
A great mathematician, De Morgan, contributed two of the most important theorems
of Boolean algebra. De Morgan’s theorems are extremely useful in simplifying an
expression in which a sum or product of variables is complemented. The two
theorems are as follows:
1. Theorem 1: (A + B + C + ......)' = A'.B'.C'......
2. Theorem 2: (A.B.C......)' = A' + B' + C' + ......
The complement of an OR sum equals the AND product of the complements.
The complement of an AND product is equal to the OR sum of the complements.
These two theorems can be easily proved by checking each one for all values
of A, B, C, etc.
The complement of any Boolean expression may be found by means of these
theorems. In these rules, two steps are used to form a complement.
1. The + symbols are replaced with • symbols and • symbols with + symbols.
2. Each of the terms in the expression is complemented.
Implications of De Morgan’s Theorems
Consider Theorem 1: (A + B)' = A'.B'
The left-hand side of the equation can be viewed as the output of a NOR gate
whose inputs are A and B. The right-hand side of the equation is the result of first
inverting both A and B and then putting them through an AND gate. These two
representations are equivalent as shown in the Figure 2.12. Hence, an AND gate
with inverters on each of its inputs is equivalent to a NOR gate.
[Figure 2.12: (a) a NOR gate; (b) an AND gate with inverters on each input — equivalent circuits]
A  B  A'  B'  A+B  AB  (A+B)'  (AB)'  A'+B'  A'B'
0  0  1   1   0    0   1       1      1      1
0  1  1   0   1    0   0       1      1      0
1  0  0   1   1    0   0       1      1      0
1  1  0   0   1    1   0       0      0      0
Consensus Theorem
AB + A'C + BC = AB + A'C
Proof: AB + A'C + BC = AB + A'C + (A + A')BC
= AB + A'C + ABC + A'BC
= AB(1 + C) + A'C(1 + B)
= AB + A'C
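The consensus theorem can also be checked exhaustively over all eight input combinations (an illustrative sketch, not part of the text):

```python
from itertools import product

for a, b, c in product((0, 1), repeat=3):
    lhs = (a & b) | ((1 - a) & c) | (b & c)   # AB + A'C + BC
    rhs = (a & b) | ((1 - a) & c)             # AB + A'C
    assert lhs == rhs                          # the consensus term BC is redundant
```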
Dual of Consensus Theorem
It can be stated as
(A + B)(A' + C)(B + C) = (A + B)(A' + C)
i.e., (AA' + AC + A'B + BC)(B + C) = AA' + AC + A'B + BC
or (AC + A'B + BC)(B + C) = AC + A'B + BC   (since AA' = 0)
Solution: (a) (A'B' + C)' = (A'B')'.C'
= (A + B).C' = AC' + BC'
(b) (A + B' + (CD)')' = (A + B')'.CD
= (A'.B).CD = A'BCD
(c) [(AB)'(C + D) + E + F']' = [(A' + B')(C + D) + E + F']'
= (A'C + A'D + B'C + B'D + E + F')'
= (A'C)'.(A'D)'.(B'C)'.(B'D)'.(E)'.(F')'
= (A + C')(A + D')(B + C')(B + D')(E')(F)
= [A.A + AD' + C'A + C'D'][B.B + BD' + C'B + C'D']E'F
= [A + AD' + C'A + C'D'][B + BD' + C'B + C'D']E'F   (since A.A = A)
= [A(1 + D') + C'A + C'D'][B(1 + D') + C'B + C'D']E'F
= [A + C'A + C'D'][B + C'B + C'D']E'F   (since 1 + D' = 1)
= [A(1 + C') + C'D'][B(1 + C') + C'D']E'F
= [A + C'D'][B + C'D']E'F
= [AB + AC'D' + BC'D' + C'D'.C'D']E'F
= [AB + C'D'(A + B) + C'D']E'F
= [AB + C'D'(A + B + 1)]E'F
= [AB + C'D']E'F
[Figure: (c) OR Gate, Y = A + B]
[Figure: (d) NOR Gate, Y = (A + B)']
(A + B)'.(A + B)' = (A + B)' = A'B'
3. OR Gate Equivalent:
Output of I NAND gate = A'
Output of II NAND gate = B'
Output of III NAND gate = (A'.B')' = A + B
[Figure: gate equivalents — (a) NOT Gate, Y = A'; (b) OR Gate, Y = A + B; (c) AND Gate, Y = AB]
Self-Instructional
Material 47
[Figure: (d) NAND Gate, Y = (AB)']
Solution: (A + BC)' = A'.(BC)'
= A'.(B' + C')
Example 2.5: Complement the expression AB + CD.
Solution: (AB + CD)' = (AB)'.(CD)' = (A' + B')(C' + D')
Example 2.6: Using Boolean algebra simplify the following expression:
Y = ABC' + A'BC' + AB'C + ABC
Realize the simplified expression for the above equation using basic logic gates.
Solution: Y = ABC' + A'BC' + AB'C + ABC
= BC'(A + A') + AB'C + ABC
= BC'.1 + AB'C + ABC   (since A + A' = 1)
= BC' + AB'C + ABC
= B(C' + AC) + AB'C
= B(C' + A) + AB'C   (since C' + AC = C' + A)
= BC' + AB + AB'C
= BC' + A(B + B'C)
= BC' + A(B + C)   (since B + B'C = B + C)
= BC' + AB + AC
[Figure: logic-gate realization of the simplified expression using AND and OR gates]
2.5 SUMMARY
Long Answer Questions
1. Write the logic diagram of XNOR gate using basic gates.
2. Show how a 2-input XOR gate can be constructed from 2-input NAND
gates.
3. Explain De-Morgan’s theorem with the help of an example.
4. Discuss the laws and rules of Boolean algebra.
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
UNIT 3 SIMPLIFICATION OF EXPRESSIONS
Structure
3.0 Introduction
3.1 Objectives
3.2 SOP and POS Expressions
3.2.1 Minterm
3.2.2 Maxterm
3.2.3 Deriving Sum of Product (SOP) Expression
3.2.4 Deriving Product of Sum (POS) Expression from a Truth Table
3.3 Karnaugh Map (K-map)
3.3.1 K-Map Simplification for Two Variables Using SOP Form
3.3.2 K-Map with Three Variables Using SOP Form
3.3.3 K-Map Simplification for Four Variables Using SOP Form
3.3.4 Five-Variable K-Map
3.3.5 K-Map Using POS Form
3.4 Quine–McCluskey Method
3.5 Two Level Implementation of Combinational Circuits
3.5.1 Types of Combinational Circuits
3.5.2 Implementation of Combinational Circuits
3.6 Answers to Check Your Progress Questions
3.7 Summary
3.8 Key Words
3.9 Self Assessment Questions and Exercises
3.10 Further Readings
3.0 INTRODUCTION
In this unit, you will learn about the SOP and POS expressions and Karnaugh map
minimizations. DeMorgan’s theorem states that the inversion bar of an expression
may be broken at any point and the operation at that point replaced by its opposite,
i.e., AND replaced by OR and vice versa. Karnaugh maps provide a pictorial
method of grouping together expressions with common factors and therefore
eliminating unwanted variables. The Karnaugh map can also be described as a
special arrangement of a truth table. The input-output relationship of the binary
variables for each gate can be represented in tabular form in a truth table. A truth
table is a compact way of representing the statements that define the values of
dependent variables. However, it is often far more convenient to use mathematical
descriptions for binary variables.
3.1 OBJECTIVES
Logical functions are generally expressed in terms of logical variables. Values taken
on by the logical functions and logical variables are in the binary form. An arbitrary
logic function can be expressed in the following forms:
1. Sum of Products (SOP)
2. Product of Sums (POS)
Product term: The AND function is referred to as a product. In Boolean algebra,
the word “product” loses its original meaning but serves to indicate an AND function.
The logical product of several variables on which a function depends is considered
to be a product term. The variables in a product term can appear either in
complemented or uncomplemented form. A'BC', for example, is a product term.
Sum term: An OR function (+ sign) is generally used to refer a sum. The logical
sum of several variables on which a function depends is considered to be a sum
term. Variables in a sum term can appear either in complemented or
uncomplemented form. A + B' + C, for example, is a sum term.
Sum Of Products (SOP): The logical sum of two or more logical product terms,
is called a Sum of Products expression. It is basically an OR operation of AND
operated variables such as :
1. Y = AB + BC + AC
2. Y = AB + A'C + BC
Product Of Sums (POS): A product of sums expression is a logical product of
two or more logical sum terms. It is basically an AND operation of OR operated
variables such as:
1. Y = (A + B)(B + C)(C + A)
2. Y = (A + B + C)(A + C')
3.2.1 Minterm
A product term containing all the K variables of the function in either complemented
or uncomplemented form is called a Minterm. A 2-variable function has four
possible combinations, viz. A'B', A'B, AB' and AB. These product terms are called
minterms or standard products or fundamental products. For a 3-binary input
variable function, there are 8 minterms as shown in Table 3.1. Each minterm can
be obtained by the AND operation of all the variables of the function. In the
minterm, a variable appears either in uncomplemented form, if it possesses a value
of 1 in the corresponding combination, or in complemented form, if it contains the
value 0. The minterms of a 3-variable function can be represented by
m0, m1, m2, m3, m4, m5, m6 and m7; the suffix indicates the decimal code
corresponding to the minterm combination.
Table 3.1 The Minterm Table
A B C   Minterm
0 0 0   A'B'C'
0 0 1   A'B'C
0 1 0   A'BC'
0 1 1   A'BC
1 0 0   AB'C'
1 0 1   AB'C
1 1 0   ABC'
1 1 1   ABC
The main property of a minterm is that it possesses the value 1 for only one
combination of the K input variables; i.e., for a K-variable function, of the 2^K minterms
only one minterm will have the value 1, while the remaining 2^K − 1 minterms will
possess the value 0 for an arbitrary input combination. For example, as shown in
Table 3.1, for input combination 010, i.e., for A = 0, B = 1 and C = 0, only the
minterm A'BC' will have the value 1, while the remaining seven minterms will have
the value 0.
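This "exactly one minterm is 1" property can be checked programmatically (an illustrative sketch; the helper `minterm` is our construction, not from the text):

```python
from itertools import product

def minterm(index: int, k: int):
    """Return minterm m_index of k variables as a function of the inputs."""
    bits = [(index >> (k - 1 - i)) & 1 for i in range(k)]
    def f(*inputs):
        # A variable appears uncomplemented where its bit is 1, complemented where 0.
        return int(all(x == b for x, b in zip(inputs, bits)))
    return f

k = 3
for combo in product((0, 1), repeat=k):
    values = [minterm(i, k)(*combo) for i in range(2 ** k)]
    assert sum(values) == 1   # exactly one of the 2^k minterms evaluates to 1
```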
Canonical Sum of Product Expression: It is defined as the logical sum of all the
minterms derived from the rows of a truth table, for which the value of the function
is 1. It is also called a minterm canonical form. The canonical sum of product
expression can be given in a compact form by listing the decimal codes in
correspondence with the minterm containing a function value of 1. For example, if
the canonical sum of product form of a 3-variable logic function Y has three minterms
A'B'C', AB'C and ABC', this can be expressed as the sum of the decimal codes
corresponding to these minterms as stated below:
Y = Σm(0, 5, 6)
  = m0 + m5 + m6
  = A'B'C' + AB'C + ABC'
where Σm(0, 5, 6) represents the summation of minterms corresponding to
the decimal codes 0, 5 and 6.
Using the following procedure, the canonical sum of product form of a logic
function can be obtained:
1. Examine each term in the given logic function. Retain it if it is a minterm;
continue to examine the next term in the same manner.
2. Check for variables that are missing in each product which is not a
minterm. Multiply the product by (X + X') for each variable X that is
missing.
3. Multiply all the products and omit the redundant terms.
The above procedures can be explained with the following examples.
Example 3.1: Obtain the canonical sum of product form of the function
Y (A, B) = A + B
Solution: The given function containing the two variables A and B has the variable
B missing in the first term and the variable A missing in the second. Therefore, the
first term has to be multiplied by (B + B') and the second term by (A + A'), as given
below:
A + B = A.1 + B.1
= A(B + B') + B(A + A')
= AB + AB' + AB + A'B
= AB + AB' + A'B   (since AB + AB = AB)
Y(A, B) = A + B = AB + AB' + A'B
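The canonical SOP form can also be read straight off a truth-table enumeration; the sketch below (Python, our own helper names) recovers the minterm indices of Y(A, B) = A + B:

```python
from itertools import product

def canonical_sop(f, k):
    """Indices of the minterms for which the k-variable function f is 1."""
    return [i for i, combo in enumerate(product((0, 1), repeat=k))
            if f(*combo) == 1]

# Y(A, B) = A + B  ->  minterms m1, m2, m3, i.e. A'B + AB' + AB
assert canonical_sop(lambda a, b: a | b, 2) == [1, 2, 3]
```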
3.2.2 Maxterm
A sum term containing all the K variables of the function in either complemented or
uncomplemented form is called a Maxterm. A 2-variable function has four possible
combinations, viz. (A + B), (A + B'), (A' + B) and (A' + B'). These sum terms are called
maxterms. So also, a 3-binary input variable function has 8 maxterms as shown in
Table 3.2. Each maxterm can be obtained by the OR operation of all the variables
of the function. In a maxterm, a variable appears either in uncomplemented form if
it possesses the value 0 in the corresponding combination or in complemented
form if it contains the value 1. The maxterms of a 3-variable function can be
represented by M0, M1, M2, M3, M4, M5, M6 and M7; the suffix indicates the
decimal code corresponding to the maxterm combination.
Table 3.2 The Maxterm Table
A B C   Maxterm
0 0 0   A + B + C
0 0 1   A + B + C'
0 1 0   A + B' + C
0 1 1   A + B' + C'
1 0 0   A' + B + C
1 0 1   A' + B + C'
1 1 0   A' + B' + C
1 1 1   A' + B' + C'
The most important property of a maxterm is that it possesses the value 0
for only one combination of the K input variables; i.e., for a K-variable function, of the
2^K maxterms only one maxterm will have the value 0, while all the remaining 2^K −
1 maxterms will have the value 1 for an arbitrary input combination. For example,
for input combination 101, i.e., for A = 1, B = 0 and C = 1, only the maxterm
(A' + B + C') will have the value 0, while the remaining seven maxterms will have the
value 1. This can be studied in Table 3.2.
From Tables 3.1 and 3.2, it is found that each maxterm is the complement
of the corresponding minterm. For example, if the maxterm is (A + B' + C), then its
complement, (A + B' + C)' = A'BC', is the corresponding minterm.
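The maxterm–minterm complement relationship can be confirmed for every index and every input combination (illustrative sketch; helper names ours):

```python
from itertools import product

def minterm_val(index, k, inputs):
    bits = [(index >> (k - 1 - i)) & 1 for i in range(k)]
    return int(all(x == b for x, b in zip(inputs, bits)))

def maxterm_val(index, k, inputs):
    bits = [(index >> (k - 1 - i)) & 1 for i in range(k)]
    # A variable is uncomplemented in M_index where its bit is 0.
    return int(any(x != b for x, b in zip(inputs, bits)))

for i in range(8):
    for combo in product((0, 1), repeat=3):
        assert maxterm_val(i, 3, combo) == 1 - minterm_val(i, 3, combo)
```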
Canonical product of sum expression: This is defined as the logical product of
all the maxterms derived from the rows of truth table, for which the value of function
is 0. It is also known as the maxterm canonical form. The canonical product of
sum expression can be given in a compact form by listing the decimal codes
corresponding to the maxterms containing a function value of 0. For example, if
the canonical product of sum form of a 3-variable logic function Y has four maxterms
(A + B + C), (A + B' + C), (A' + B + C) and (A' + B' + C'), then it can be expressed as
the product of decimal codes as given below:
Y = ΠM(0, 2, 4, 7)
  = M0.M2.M4.M7
  = (A + B + C)(A + B' + C)(A' + B + C)(A' + B' + C')
The following procedure can be used to obtain the canonical product of the
sum form of a logic function:
1. Examine each term in the given logic function. Retain it if it is a maxterm;
continue to examine the next term in the same manner.
2. Check for variables that are missing in each sum which is not a maxterm.
Add (X.X') to the sum term for each variable X that is missing.
3. Expand the expression using the distributive property and eliminate the
redundant terms.
The above procedures can be explained with the following examples.
Example 3.2: Express the function Y = A + B'C in (a) canonical SOP and
(b) canonical POS form.
Solution: (a) Canonical sum of products form
Y = A + B'C
= A(B + B')(C + C') + B'C(A + A')
= (AB + AB')(C + C') + AB'C + A'B'C
= ABC + ABC' + AB'C + AB'C' + AB'C + A'B'C
= ABC + ABC' + AB'C + AB'C' + A'B'C   (since AB'C + AB'C = AB'C)
Y = m7 + m6 + m5 + m4 + m1
Therefore, Y = Σm(1, 4, 5, 6, 7)
(b) Canonical product of sum form
Y = A + B'C
= (A + B')(A + C)   [since A + B'C = (A + B')(A + C)]
= (A + B' + CC')(A + C + BB')
= (A + B' + C)(A + B' + C')(A + B + C)(A + B' + C)
= (A + B + C)(A + B' + C)(A + B' + C')   [the repeated factor (A + B' + C) is dropped]
Y = M2.M3.M0, or Y = M0.M2.M3
Therefore, Y = ΠM(0, 2, 3)
Now, the final SOP expression for the output Y is obtained by summing
(OR operation of) the four product terms as follows:
Y = A'BC' + A'BC + AB'C + ABC
The procedure for obtaining the output expression in SOP form from a truth
table can be summarised, in general, as follows:
1. Give a product term for each input combination in the table, containing
an output value of 1.
2. Each product term contains its input variables in either complemented
or uncomplemented form. If an input variable is 0, it appears in
complemented form; if the input variable is 1, it appears in
uncomplemented form.
3. All the product terms are OR operated together in order to produce the
final SOP expression of the output.
3.2.4 Deriving Product of Sum (POS) Expression from a Truth Table
The Product of Sum (POS) expression for a Boolean (switching) function can NOTES
also be obtained from a truth table by the AND operation of the sum terms
corresponding to the combinations for which the function assumes the value 0. In
the sum term, the input variable appears in an uncomplemented form if it has the
value 0 in the corresponding combination and in the complemented form if it has
the value 1.
Studying the truth table shown in Table 3.3, for a 3-input function Y, we find
that the Y value is 0 for the input combinations 000, 001, 100 and 110 and that
their corresponding sum terms are (A + B + C), (A + B + C'), (A' + B + C) and (A' + B' + C)
respectively.
Now the final POS expression for the output Y is obtained by the AND
operation of the four sum terms as follows:
Y = (A + B + C)(A + B + C')(A' + B + C)(A' + B' + C)
The procedure for obtaining the output expression in POS form from a
truth table can be summarised, in general, as follows:
1. Give a sum term for each input combination in the table, which has an
output value of 0.
2. Each sum term contains all its input variables in complemented or
uncomplemented form. If the input variable is 0, then it appears in an
uncomplemented form; if the input variable is 1, it appears in the
complemented form.
3. All the sum terms are AND operated together to obtain the final POS
expression of the output.
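The three-step POS procedure above can be sketched directly (Python for illustration; the variable names A, B, C match the text, the helper is ours):

```python
from itertools import product

def pos_terms(f, k):
    """Sum terms (as strings) for the rows where the k-variable function f is 0."""
    names = "ABC"[:k]
    terms = []
    for combo in product((0, 1), repeat=k):
        if f(*combo) == 0:
            # Variable uncomplemented if its value is 0, complemented if it is 1.
            literals = [n if v == 0 else n + "'" for n, v in zip(names, combo)]
            terms.append("(" + " + ".join(literals) + ")")
    return terms

# Y = A'BC' + A'BC + AB'C + ABC, i.e. Y = 1 on rows 010, 011, 101, 111
f = lambda a, b, c: int((a, b, c) in {(0, 1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)})
print("".join(pos_terms(f, 3)))   # (A + B + C)(A + B + C')(A' + B + C)(A' + B' + C)
```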
The POS expression for a Boolean (switching) function can also be obtained
from its SOP expression by double complementation, Y = (Y')', as given in the following example.
Consider a function,
Y = A'BC' + A'BC + AB'C + ABC
The complement Y' can be obtained by the OR operation of the minterms
which are not available in Y. Therefore,
Y' = A'B'C' + A'B'C + AB'C' + ABC'
Y = (Y')' = (A'B'C' + A'B'C + AB'C' + ABC')'
= (A'B'C')'.(A'B'C)'.(AB'C')'.(ABC')'
= (A + B + C)(A + B + C')(A' + B + C)(A' + B' + C)
Check Your Progress
1. Define SOP and POS.
2. Define minterm.
3. What is maxterm?
Minterm   X  Y
X'Y'      0  0
X'Y       0  1
XY'       1  0
XY        1  1
If a variable's input is 1, then it is written as it is; else the complement of that
variable is written.
Minterm Example With Three Variables: Similarly, a function having
three inputs has the minterms that are shown in Table 3.5.
Table 3.5 Truth Table for Three Variables Minterm
Minterm   X  Y  Z
X'Y'Z'    0  0  0
X'Y'Z     0  0  1
X'YZ'     0  1  0
X'YZ      0  1  1
XY'Z'     1  0  0
XY'Z      1  0  1
XYZ'      1  1  0
XYZ       1  1  1
F(X, Y) = XY
X Y XY
0 0 0
0 1 0
1 0 0
1 1 1
This means that it has a cell for each line for the truth table of a function.
For example, the truth table for the function F(X, Y) = X + Y is given in Table
3.7.
Table 3.7 Truth Table for the Given Expression
F(X, Y) = X + Y
X  Y  X + Y
0 0 0
0 1 1
1 0 1
1 1 1
This function is equivalent to the OR of all of the minterms that have a value
of 1 (SOP form). Thus,
F(X, Y) = X + Y = X'Y + XY' + XY   ...(1)
Table 3.7 is mapped into K-map in Figure 3.2.
Minimization Technique
SOP form f = Y.
Solution:
In this example, we have the equation as input, and we have one output function.
Draw the K-map for function f, marking 1 in the X'Y, XY' and XY positions.
Now combine two 1’s as shown in the figure below to form the two single terms.
SOP form f = X + Y
3.3.2 K-Map with Three Variables Using SOP Form
K-map for three variables is constructed as shown in Figure 3.6.
We have placed each minterm in the cell that will hold its value. Please note
that the values for the yz combination at the top of the matrix form a pattern that is
not a normal binary sequence. A K-map must be ordered so that each minterm
differs only in one variable from each neighbouring cell; hence 11 appears before
10. This helps in simplification.
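The required column order is the Gray code, in which adjacent codes differ in exactly one bit; a short sketch (Python, illustrative only):

```python
def gray(n: int):
    """n-bit Gray code sequence: adjacent codes differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

cols = [format(g, "02b") for g in gray(2)]
assert cols == ["00", "01", "11", "10"]   # K-map column order: 11 before 10
```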
The first row of the K-map contains all minterms where x has a value of
zero. The first column contains all minterms where y and z both have a value of
zero. Consider the function:
F(X, Y, Z) = X'Y'Z + X'YZ + XY'Z + XYZ
This grouping tells us that the changes in the variables x and y have no
influence upon the value of the function. They are irrelevant. Refer Figure 3.8.
XY Z XY Z
Solution:
Its K-map is shown below in the figure. There are (only) two groupings of 1s.
In this K-map, we see an example of a group that wraps around the sides
of a K-map. This group tells us that the values of x and y are not relevant to the
term of the function that is encompassed by the group.
The group in the top row tells us that only the value of x is significant in that
group.
f W XY WY
F (W , X ,Y , Z ) W X YZ
A different grouping as shown in Figure 3.17 gives us the function:
F (W , X ,Y , Z ) W Z YZ
3. Grouping the 0’s leads to an expression for the inverse of the function, which is
not logically equivalent to that obtained by grouping the 1’s.
4. Minimizing for the inverse function may be particularly advantageous if there
are many more 0’s than 1’s on the map.
5. We can also apply De Morgan’s theorem to obtain a POS expression. For
example, consider Table 3.8.
Table 3.8 Two-Variable Maxterm Representation
X Y Maxterm
0 0 XY
0 1 X Y ’
1 0 X’ Y
1 1 X’ Y ’
f ( A )( B ) POS form
Example 3.9: Simplify the given function f(A, B) = Σm(1, 2, 3) using K-map.
Solution:
f = A + B
Minterms U V W X
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
14 1 1 1 0
15 1 1 1 1
No. of 1’s   Minterms   U V W X
1 1 0 0 0 1
1 2 0 0 1 0
1 8 1 0 0 0
2 3 0 0 1 1
2 9 1 0 0 1
2 10 1 0 1 0
3 7 0 1 1 1
3 11 1 0 1 1
3 14 1 1 1 0
4 15 1 1 1 1
Any two numbers in these groups which differ from each other by only
one variable can be chosen and combined, to get 2-cell combination, as shown in
Table 3.11.
Table 3.11 Two-Cell Combinations
Combination U V W X
(1,3) 0 0 - 1
(1,9) - 0 0 1
(2,3) 0 0 1 -
(2,10) - 0 1 0
(8,9) 1 0 0 -
(8,10) 1 0 - 0
(3,7) 0 - 1 1
(3,11) - 0 1 1
(9,11) 1 0 - 1
(10,11) 1 0 1 -
(10,14) 1 - 1 0
(7,15) - 1 1 1
(11,15) 1 - 1 1
(14,15) 1 1 1 -
From the 2-cell combinations, any two that differ in only one variable and
have the dash in the same position can be combined to form 4-cell combinations, as shown in Table 3.12.
Table 3.12 Four-Cell Combinations
Combination U V W X
(1,3,9,11) - 0 - 1
(2,3,10,11) - 0 1 -
(8,9,10,11) 1 0 - -
(3,7,11,15) - - 1 1
(10,11,14,15) 1 - 1 -
The cells (1, 3) and (9, 11) form the same 4-cell combination as the cells (1,
9) and (3, 11). The order in which the cells are placed in a combination does not
have any effect. Thus, the (1, 3, 9, 11) combination could be written as (1, 9, 3,
11).
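The combining rule used in Tables 3.11 and 3.12 — merge two implicants that differ in exactly one position, replacing that position with a dash — can be sketched as follows (illustrative Python, names ours):

```python
def combine(a: str, b: str):
    """Merge two implicants (e.g. '0001' and '0011') that differ in exactly one
    position; that position becomes a dash. Returns None otherwise."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + "-" + a[i + 1:]

assert combine("0001", "0011") == "00-1"   # cells (1, 3) -> 0 0 - 1
assert combine("0001", "0010") is None      # differ in two variables: no merge
```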
From the above 4-cell combination table, the prime implicant chart can be
plotted as shown in Table 3.13.
Table 3.13 Prime Implicant Chart
Prime Implicants   1  2  3  7  8  9  10  11  14  15
(1,3,9,11)         X  -  X  -  -  X  -   X   -   -
(2,3,10,11) - X X - - - X X - -
(8,9,10,11) - - - - X X X X - -
(3,7,11,15) - - X X - - - X - X
(10,11,14,15) - - - - - - X X X X
Result X X - X X - - - X -
The columns having only one cross mark correspond to the essential prime
implicants. The sum of these prime implicants gives the function in its minimal SOP form.
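The single-cross-mark rule can be applied mechanically to the chart of Table 3.13 (illustrative sketch; the data below is copied from the table):

```python
# Prime-implicant chart from Table 3.13: each implicant covers these minterms.
chart = {
    (1, 3, 9, 11):    {1, 3, 9, 11},
    (2, 3, 10, 11):   {2, 3, 10, 11},
    (8, 9, 10, 11):   {8, 9, 10, 11},
    (3, 7, 11, 15):   {3, 7, 11, 15},
    (10, 11, 14, 15): {10, 11, 14, 15},
}
minterms = {1, 2, 3, 7, 8, 9, 10, 11, 14, 15}

# A minterm column covered by exactly one implicant makes that implicant essential.
essential = set()
for m in minterms:
    covering = [pi for pi, cov in chart.items() if m in cov]
    if len(covering) == 1:
        essential.add(covering[0])
print(sorted(essential))
```

Here minterms 1, 2, 7, 8 and 14 each have a single cross, so all five prime implicants turn out to be essential, matching the Result row of the table.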
Truth Table: A truth table defines the function of a logic gate by providing a
concise list that shows all the output states in tabular form for each possible
combination of input variables that the gate can encounter (Refer Table 3.14).
Table 3.14 Typical Truth Table
C B A Q
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 1
1 1 0 0
1 1 1 0
Logic Diagram: This is a graphical representation of a logic circuit that shows the
wiring and connections of each individual logic gate, represented by a specific
graphical symbol, that implements the logic circuit (Refer Figure 3.19).
Combinational logic circuits are made up from individual logic gates only.
They can also be considered as decision-making circuits. Combinational logic is
about combining logic gates together to process two or more signals in order to
produce at least one output signal according to the logical function of each logic
gate.
Common combinational circuits made up from individual logic gates that
carry out desired applications include multiplexers, demultiplexers, encoders,
decoders, full adders (FAs) and half adders (HAs), etc.
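As a taste of one such circuit, a 2-to-1 multiplexer reduces to a small SOP expression, Y = S'A + SB (an illustrative sketch; the select-line name S is ours):

```python
def mux2(a: int, b: int, s: int) -> int:
    """2-to-1 multiplexer: Y = S'.A + S.B; the select input S routes A or B."""
    return ((1 - s) & a) | (s & b)

assert mux2(a=1, b=0, s=0) == 1   # S = 0 selects input A
assert mux2(a=1, b=0, s=1) == 0   # S = 1 selects input B
```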
3.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The logical sum of two or more logical product terms is called a Sum of
Products expression. It is basically an OR operation of AND operated
variables. A product of sums expression is a logical product of two or more
logical sum terms. It is basically an AND operation of OR operated variables.
2. A product term containing all the K variables of the function in either
complemented or uncomplemented form is called a Minterm.
3. A sum term containing all the K variables of the function in either
complemented or uncomplemented form is called a Maxterm.
4. A K-map is a matrix consisting of rows and columns that represent the
output values of a Boolean function.
5. A K-map can be in two forms: SOP form and POS form.
6. If a circuit is designed so that a particular set of inputs can never happen,
we call this set of inputs a don’t care condition.
3.7 SUMMARY
3.10 FURTHER READINGS
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
BLOCK II
COMBINATIONAL CIRCUITS AND
SEQUENTIAL CIRCUITS
UNIT 4 COMBINATIONAL
CIRCUITS
Structure
4.0 Introduction
4.1 Objectives
4.2 Combinational Logic
4.3 Adders and Subtractors
4.3.1 Full-Adder
4.3.2 Half-Subtractor
4.3.3 Full-Subtractor
4.4 Decoders
4.4.1 3-Line-to-8-Line Decoder
4.5 Encoders
4.5.1 Octal-to-Binary Encoder
4.6 Multiplexer
4.7 Demultiplexer
4.7.1 Basic Two-Input Multiplexer
4.7.2 Four-Input Multiplexer
4.8 Answers to Check Your Progress Questions
4.9 Summary
4.10 Key Words
4.11 Self Assessment Questions and Exercises
4.12 Further Readings
4.0 INTRODUCTION
Logic circuits whose outputs at any instance of time are entirely dependent on the
input signals present at that time are known as combinational circuits. A
combinational circuit has no memory characteristic as its output does not depend
upon any past inputs. A combinational logic circuit consists of input variables, logic
gates and output variables. The design of a combinational circuit starts from the
verbal outline of the problem and ends in a logic circuit diagram or a set of Boolean
functions from which the logic diagram can be easily obtained.
A clock pulse is produced by the vibration of a quartz crystal located inside a
computer; counting these pulses determines the speed of the computer's processor
in MHz or GHz. The function of an encoder is to convert a decimal value to a
binary value. An encoder is a device that converts information from one format or
code to another. It saves memory space. A decoder is a device which does the
reverse of an encoder, undoing the encoding so that the original information can be
retrieved. Multiplexers are used to create digital semiconductors such as CPUs
and graphics controllers. You will also learn about the demultiplexer, which is the
inverse of the multiplexer in that it takes a single data input and n address inputs.
It has 2^n outputs. The address input determines which data output is going to have
the same value as the data input. The other data outputs will have the value 0.
4.1 OBJECTIVES
The outputs of combinational logic circuits are only determined by their current
input state as they have no feedback, and any changes to the signals being applied
to their inputs will immediately have an effect at the output. In other words, when
the input condition of a combinational logic circuit changes state, so too does the
output, as combinational circuits have no memory. Combinational logic circuits are made up
from basic logic AND, OR or NOT gates that are combined or connected together
to produce more complicated switching circuits. As combinational logic circuits are
made up from individual logic gates, they can also be considered as decision-making
circuits, and combinational logic is about combining logic gates together to process
two or more signals in order to produce at least one output signal according to the
logical function of each logic gate. Common combinational circuits made up from
individual logic gates include multiplexers, decoders and demultiplexers, full and
half adders, etc. One of the most common uses of combinational logic is in multiplexer
and demultiplexer type circuits. Here, multiple inputs or outputs are connected to
a common signal line and logic gates are used to decode an address to select a
single data input or output switch. A multiplexer consists of two separate
components, a logic decoder and some solid state switches. Figure 4.1 shows the
hierarchy of combinational logic circuits.
[Figure 4.2: Half-adder. Logic symbol with inputs A, B and outputs S, C; gate circuit producing SUM = A′B + AB′ and CARRY = AB.]
Inputs Outputs
Addend Augend Sum Carry
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
First entry : Inputs : A = 0 and B = 0
Human response : 0 plus 0 is 0 with a carry of 0.
Half-adder response : SUM = 0 and CARRY = 0
Second entry: Inputs : A = 1 and B = 0
Human response : 1 plus 0 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
Third entry: Inputs : A = 0 and B = 1
Human response : 0 plus 1 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
Fourth entry: Inputs : A = 1 and B = 1
Human response : 1 plus 1 is 0 with a carry of 1.
Half-adder response : SUM = 0 and CARRY = 1
The SUM output represents the least significant bit (LSB) of the sum. The Boolean
expression for the two outputs can be obtained directly from the truth table.
S(SUM) = A′B + AB′ = (A + B)(A′ + B′) = A ⊕ B
C(CARRY) = AB = (A + B)(A + B′)(A′ + B)
The implementation of the half-adder circuit using basic gates is shown in Figure
4.3.
[Figure 4.3: Half-adder using basic gates on inputs A and B, producing S = A′B + AB′ and C = AB.]
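As a quick check of the half-adder truth table above, here is a minimal Python model (the function name is ours, not the text's):

```python
def half_adder(a, b):
    """Half-adder on bits a, b: SUM = A xor B, CARRY = A and B."""
    return a ^ b, a & b
```

Evaluating it over all four input pairs reproduces the Addend/Augend table exactly.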
4.3.1 Full-Adder
A half-adder has only two inputs and there is no provision to add a carry coming
from the lower order bits when multi-bit addition is performed. For this purpose,
a third input terminal is added and this circuit is used to add A, B and Cin.
A full-adder is a combinational circuit that performs the arithmetic sum of
three input bits and produces a SUM and a CARRY.
It consists of three inputs and two outputs. Two of the input variables, denoted by
A and B, represent the two significant bits to be added; the third input, Cin,
represents the carry from the previous lower significant position. Two
outputs are necessary because the arithmetic sum of three binary digits ranges
from 0 to 3, and binary 2 or 3 needs two digits. The outputs are designated by the
symbols S (for SUM) and Cout (for CARRY). The binary variable S gives the value
of the LSB (least significant bit) of the SUM. The binary variable Cout gives the
output CARRY.
[Figure 4.4: (a) Logic symbol of a full-adder; (b) full-adder built from two half-adders, with inputs A, B, Cin and outputs Sum and Cout.]
SUM = (A ⊕ B) ⊕ Cin
= (A′B + AB′)′Cin + (A′B + AB′)Cin′
= (A + B′)(A′ + B)Cin + (A′B + AB′)Cin′
SUM = A ⊕ B ⊕ Cin = A′B′Cin + A′BCin′ + AB′Cin′ + ABCin
For A = 1, B = 0 and Cin = 1,
SUM = 0·1·1 + 0·0·0 + 1·1·0 + 1·0·1 = 0
The sum of products for CARRY is,
Cout = A′BCin + AB′Cin + ABCin′ + ABCin
= A′BCin + AB′Cin + ABCin′ + ABCin + ABCin + ABCin
= BCin(A′ + A) + ACin(B′ + B) + AB(Cin′ + Cin)
Cout = BCin + ACin + AB = AB + BCin + ACin
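The SUM and CARRY expressions derived above can be verified exhaustively with a short Python sketch (illustrative function name):

```python
def full_adder(a, b, cin):
    """Full-adder: SUM = A xor B xor Cin, Cout = AB + BCin + ACin."""
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (a & cin)
    return s, cout
```

For every input combination, the pair (SUM, Cout) read as a 2-bit number equals the arithmetic sum A + B + Cin, and the worked case A = 1, B = 0, Cin = 1 gives SUM = 0 with a carry of 1.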
Inputs Outputs
Minuend Subtrahend Difference Borrow
A B D C
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
4.3.3 Full-Subtractor
A full-subtractor is a combinational circuit that performs 3-bit subtraction.
The logic symbol of a full-subtractor is shown in Figure 4.6(a). It has three
inputs, An (minuend), Bn (subtrahend) and Cn–1 (borrow from the previous stage) and
two outputs D (difference) and Cn (borrow). The truth table for a full-subtractor is
given in Table 4.4.
[Figure 4.6: (a) Logic symbol of a full-subtractor with inputs An, Bn, Cn–1 and outputs D and Cn; (b) gate-level implementation using gates G1–G4.]
Inputs Outputs
Minuend Subtrahend Borrowin Difference Borrowout
An Bn Cn–1 D Cn
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1
The minterms taken from the truth table give the Boolean expression (SOP).
We notice that the equation for D is the same as the SUM output of a full-adder,
and the output Cn resembles the carry output of a full-adder, except that An is
complemented. From these similarities, we understand that it is possible to convert
a full-adder into a full-subtractor by merely complementing An prior to its application
to the inputs of the gates that form the borrow output, as shown in Figure 4.8.
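Under the same conventions, a hypothetical Python model of the full-subtractor reproduces Table 4.4. As the paragraph above notes, the borrow expression is just the full-adder carry with An complemented:

```python
def full_subtractor(a, b, bin_):
    """Full-subtractor: difference D = A xor B xor Bin; the borrow out is
    the full-adder carry expression with A complemented."""
    d = a ^ b ^ bin_
    bout = ((1 - a) & b) | ((1 - a) & bin_) | (b & bin_)
    return d, bout
```

Each row of Table 4.4 can be checked directly, e.g. An = 0, Bn = 0, Cn–1 = 1 gives D = 1 with a borrow of 1.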
K-map for the borrow output Cn:

Cn–1 \ AnBn   00  01  11  10
   0           0   1   0   0
   1           1   1   1   0
4.4 DECODERS
Many digital systems require the decoding of data. Decoding is necessary in such
applications as data multiplexing, rate multiplying, digital display, digital-to-analog
converters and memory addressing. It is accomplished by matrix systems that can
be constructed from such devices as magnetic cores, diodes, resistors, transistors
and FETs.
A decoder is a combinational logic circuit which converts binary information
from n input lines to a maximum of 2^n unique output lines such that each output line
will be activated for only one of the possible combinations of inputs. If the n-bit
decoded information has unused or don't care combinations, the decoder output
will have fewer than 2^n outputs.
A decoder is similar to a demultiplexer, with one exception: there is no data
input.
A single binary word n digits in length can represent 2^n different elements of
information.
An AND gate can be used as the basic decoding element because its output
is HIGH only when all of its inputs are HIGH. For example, to detect the binary
input 1011, the 0 bit must first be inverted so that all four inputs to the AND gate
are HIGH when the binary number 1011 occurs.
If a NAND gate is used in place of the AND gate, a LOW output will
indicate the presence of the proper binary code.
4.4.1 3-Line-to-8-Line Decoder
Figure 4.9 shows the decoding matrix for a binary word of 3 bits. In this
case, 3 inputs are decoded into eight outputs, each output representing one of the
minterms of the 3 input variables. The operation of this circuit is listed in Table 4.5.
Table 4.5 Truth Table for 3-to-8 Line Decoder
Inputs Outputs
A B C D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1
[Figure 4.9: 3-line-to-8-line decoder. Inputs A, B, C and their complements drive eight AND gates whose outputs are:]
D0 = A′B′C′
D1 = A′B′C
D2 = A′BC′
D3 = A′BC
D4 = AB′C′
D5 = AB′C
D6 = ABC′
D7 = ABC
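Table 4.5 can be reproduced with a small Python sketch (illustrative function name; A is taken as the MSB, as in the table):

```python
def decoder_3to8(a, b, c):
    """3-to-8 decoder: returns outputs D0..D7; exactly one line is 1
    for each input code, with A as the most significant bit."""
    index = a * 4 + b * 2 + c
    return [1 if i == index else 0 for i in range(8)]
```

For example, ABC = 101 activates only line D5, matching the sixth row of Table 4.5.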
4.5 ENCODERS
[Figure 4.10: Block diagram of an encoder: n input lines, only one HIGH at a time, producing an m-bit output code.]
An encoder has n input lines only one of which is active at any time and m output
lines. It encodes one of the active inputs such as a decimal or octal digit to a coded
output such as binary or BCD. Encoders can also be used to encode various
symbols and alphabetic characters. The process of converting from familiar symbols
or numbers to a coded format is called encoding. In an encoder, the number of
outputs is always less than the number of inputs. The block diagram of an encoder
is shown in Figure 4.10.
4.5.1 Octal-to-Binary Encoder
We know that a binary-to-octal decoder (3-line-to-8-line decoder) accepts a 3-bit
input code and activates one of eight output lines corresponding to that code. An
octal-to-binary encoder (8-line-to-3-line encoder) performs the opposite function:
it accepts eight input lines and produces a 3-bit output code corresponding to the
activated input. The logic diagram for an octal-to-binary encoder is shown in
Figure 4.11. It is implemented with three 4-input OR gates. The circuit is designed
so that when D0 is HIGH, the binary code 000 is generated; when D1 is HIGH,
the binary code 001 is generated, and so on.
D7 D6 D5 D4 D3 D2 D1 D0
Y0 = D1 + D3 + D5 + D7
Y1 = D2 + D3 + D6 + D7
Y2 = D4 + D5 + D6 + D7
The design is made simple by the fact that only eight out of the total 2^n possible
input conditions are used. Table 4.6 shows the truth table for the octal-to-binary
encoder.
Table 4.6 Truth Table for the Octal-to-Binary Encoder
Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 Y2 Y1 Y0
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
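The three OR expressions for Y0, Y1 and Y2 can be checked in Python; the list d below stands for the one-hot inputs D0–D7 (the function name is illustrative):

```python
def octal_encoder(d):
    """Octal-to-binary encoder: d is the list D0..D7 with exactly one
    input HIGH; Y0, Y1, Y2 are the OR expressions given above."""
    y0 = d[1] | d[3] | d[5] | d[7]
    y1 = d[2] | d[3] | d[6] | d[7]
    y2 = d[4] | d[5] | d[6] | d[7]
    return y2, y1, y0
```

Raising input D5 alone, for instance, yields the code Y2Y1Y0 = 101, in agreement with Table 4.6.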
4.6 MULTIPLEXER
This type of encoder has ten inputs, one for each decimal digit, and four outputs
corresponding to the BCD code, as shown in Figure 4.12. The truth table for a
decimal-to-BCD encoder is given in Table 4.7. From the truth table, we can
determine the relationship between each BCD output bit and the decimal digits. For
example, the most significant bit of the BCD code, D, is a 1 for decimal digit 8 or
9. The OR expression for bit D in terms of the decimal digits can therefore be written as,
D = 8 + 9
The output C is HIGH for decimal digits 4, 5, 6 and 7 and can be written as,
C = 4 + 5 + 6 + 7
[Figure 4.12: Decimal-to-BCD encoder block diagram: decimal inputs 0–9, BCD outputs with weights 1, 2, 4 and 8.]
[Logic diagram: decimal inputs 9–0 OR-ed onto the BCD outputs A (LSB) through D (MSB).]
4.7 DEMULTIPLEXER
[Block diagram of a multiplexer: n input signals selected onto one output signal.]
Figure 4.15 shows the basic 2 × 1 MUX. This MUX has two input lines A and B
and one output line Y. There is one select input line. When the select input S = 0,
data from A is selected to the output line Y. If S = 1, data from B will be selected
to the output Y. The logic circuitry for a two-input MUX with data inputs A and B
and select input S is shown in Figure 4.15. It consists of two AND gates G1 and
G2, a NOT gate G3 and an OR gate G4. The Boolean expression for the output is
given by
Y = AS′ + BS
When the select line input S = 0, the expression becomes
Y = A · 1 + B · 0 (Gate G1 is enabled)
which indicates that output Y will be identical to input signal A.
Similarly, when S = 1, the expression becomes
Y = A · 0 + B · 1 = B (Gate G2 is enabled)
showing that output Y will be identical to input signal B.
In many situations a strobe or enable input E is added to the select line S, as
shown in Figure 4.16. The multiplexer becomes operative only when the strobe
line E = 0.
[Figure 4.15: 2 × 1 multiplexer: logic symbol and gate-level circuit. AND gates G1 and G2, NOT gate G3 and OR gate G4 implement Y = AS′ + BS.]
Figure 4.16 shows the logic diagram of 2-input multiplexer with strobe input.
[Figure 4.16: 2-input multiplexer with a strobe (enable) input E added through NOT gate G5 alongside the select logic of gates G1–G4.]
When the strobe input E is at logic 0, the output of NOT gate G5 is 1 and both
AND gates G1 and G2 are enabled. Accordingly, when S = 0 or 1, input A or B
respectively is selected as before. When the strobe input E = 1, the gates are
disabled and the circuit will not function.
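The behaviour of the strobed 2 × 1 MUX can be modelled in a few lines of Python (a behavioural sketch, not a gate-level one; the function name is ours):

```python
def mux2(a, b, s, e=0):
    """2-to-1 MUX with active-LOW strobe: Y = (AS' + BS) when E = 0,
    and 0 when E = 1 (both AND gates disabled)."""
    if e == 1:
        return 0
    return a if s == 0 else b
```

With E = 0, S = 0 routes A to the output and S = 1 routes B, exactly as described above.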
4.7.2 Four-Input Multiplexer
A logic symbol and diagram of a 4-input multiplexer are shown in Figure 4.17. It
has two data select lines S0 and S1 and four data input lines. Each of the four data
input lines is applied to one input of an AND gate.
Depending on S1S0 being 00, 01, 10 or 11, data from input lines A to D
are selected in that order. The function table is given in Table 4.8.
Table 4.8 Function Table for the 4-Input Multiplexer
S1 S0 Y
0 0 A
0 1 B
1 0 C
1 1 D
[Figure 4.17: 4-input multiplexer: logic symbol (4 × 1 MUX with inputs A–D, select lines S1, S0 and output Y) and gate-level circuit using AND gates G1–G4, inverters G5, G6 and OR gate G7.]
Y = AS1′S0′ + BS1′S0 + CS1S0′ + DS1S0
If S1S0 = 00 (binary 0) is applied to the data select lines, the data on input A appears
on the data output line.
Self-Instructional
98 Material
Combinational Circuits
Y = A · 1 · 1 + B · 0 · 1 + C · 1 · 0 + D · 0 · 0
= A (Gate G1 is enabled)
Similarly, Y = BS1′S0 = B · 1 · 1 = B when S1S0 = 01 (Gate G2 is enabled)
Y = CS1S0′ = C · 1 · 1 = C when S1S0 = 10 (Gate G3 is enabled)
Y = DS1S0 = D · 1 · 1 = D when S1S0 = 11 (Gate G4 is enabled)
In a similar style, we can construct 8 × 1 MUXes, 16 × 1 MUXes, etc. Nowadays
two-, four-, eight- and 16-input multiplexers are readily available in the TTL and
CMOS logic families. These basic ICs can be combined for multiplexing a larger
number of inputs.
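Table 4.8 can likewise be modelled behaviourally in Python (an illustrative sketch of the select logic, not of the gates):

```python
def mux4(a, b, c, d, s1, s0):
    """4-to-1 MUX: Y = A·S1'S0' + B·S1'S0 + C·S1S0' + D·S1S0 (Table 4.8)."""
    return [a, b, c, d][s1 * 2 + s0]
```

Selecting S1S0 = 10, for example, routes input C to the output, as Gate G3 does in the circuit.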
Multiplexer Applications: Multiplexer circuits find numerous applications in digital
systems. These applications include data selection, data routing, operation
sequencing, parallel-to-serial conversion, waveform generation and logic function
generation.
6. A digital multiplexer or a data selector (MUX) is a combinational circuit
that accepts several digital data inputs and selects one of them and transmits
information on a single output line.
7. A demultiplexer is a combinational logic circuit that receives information on
a single line and transmits this information on one of the many output lines.
4.9 SUMMARY

4.12 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
UNIT 5 SEQUENTIAL CIRCUITS
5.0 INTRODUCTION
In this unit, you will learn that digital systems can be either asynchronous or
synchronous. Synchronous sequential circuits can change their states only when
clock signals are present. Clock circuits produce rectangular or square waveforms.
You will also learn that a flip-flop that has a clock input is called a clocked
flip-flop. A clocked flip-flop is characterized by the fact that it changes states only
in synchronization with the clock pulse.
You will learn about the registers and counters. A register is a group of flip-
flops suitable for storing binary information. Each flip-flop is a binary cell capable
of storing one bit of information. An n-bit register has a group of n flip-flops and is
capable of storing any binary information containing n bits. The register is mainly
used for storing and shifting binary data entered into it from an external source. A
counter, by function, is a sequential circuit consisting of a set of flip-flops connected
in a special manner to count the sequence of the input pulses received in digital
form. Counters are fundamental components of digital system. Digital counters
find wide application like pulse counting, frequency division, time measurement
Self-Instructional and control and timing operations.
5.1 OBJECTIVES
5.2 FLIP-FLOPS
Synchronous circuits change their states only when clock pulses are present. The
operation of the basic latch can be modified, by providing an additional control
input that determines, when the state of the circuit is to be changed. The latch with
the additional control input is called the flip-flop. The additional control input is
either the clock or enable input.
Flip-flops are of different types depending on how their inputs and clock
pulses cause transition between two states. There are four basic types, namely, S-
R, J-K, D and T flip-flops.
5.2.1 S-R Flip-Flop
The S-R flip-flop consists of two additional AND gates at the S and R inputs of S-
R latch as shown in Figure 5.1.
In this circuit, when the clock input is LOW, the outputs of both AND
gates are LOW and changes in the S and R inputs will not affect the output (Q) of
the flip-flop. When the clock input becomes HIGH, the value at S and R inputs
will be passed to the output of the AND gates and the output (Q ) of the flip-flop
will change according to the changes in S and R inputs as long as the clock input is
HIGH. In this manner, one can strobe or clock the flip-flop so as to store either a
1 by applying S = 1, R = 0 (to set) or a 0 by applying S = 0, R = 1 (to reset) at any
time and then hold that bit of information for any desired period of time by applying
a LOW at the clock input. This flip-flop is called a clocked S-R flip-flop.
The S-R flip-flop which consists of the basic NOR latch and two AND
gates is shown in Figure 5.2.
The S-R flip-flop which consists of the basic NAND latch and two other
NAND gates is shown in Figure 5.3. The S and R inputs control the state of the
flip-flop in the same manner as described earlier for the basic or unclocked S-R
latch. However, the flip-flop does not respond to these inputs until the rising edge
of the clock signal occurs. The clock pulse input acts as an enable signal for the
other two inputs. The outputs of NAND gates 1 and 2 stay at the logic 1 level as
long as the clock input remains at 0. This 1 level at the inputs of NAND-based
basic S-R latch retains the present state, i.e., no change occurs. The characteristic
table of the S-R flip-flop is shown in truth table of Table 5.1 which shows the
operation of the flip-flop in tabular form.
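The clocked S-R behaviour described above can be summarized in a short Python sketch (behavioural, with the forbidden S = R = 1 combination flagged; the function name is ours):

```python
def sr_next(q, s, r, clk):
    """Next state of a clocked S-R flip-flop from present state q."""
    if clk == 0:
        return q              # inputs gated off: state is held
    if s == 1 and r == 1:
        raise ValueError("S = R = 1 is not allowed")
    if s == 1:
        return 1              # set
    if r == 1:
        return 0              # reset
    return q                  # S = R = 0: no change
```

Note how a LOW clock holds the stored bit regardless of S and R, which is exactly the strobing behaviour described in the text.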
The D (delay) flip-flop has only one input called the Delay (D) input and two
outputs Q and Q′. It can be constructed from an S-R flip-flop by inserting an
inverter between S and R and assigning the symbol D to the S input. The structure
of the D flip-flop is shown in Figure 5.4(a). Basically, it consists of a NAND flip-flop
with a gating arrangement on its inputs. It operates as follows:
1. When the CLK input is LOW, the D input has no effect, since the set and
reset inputs of the NAND flip-flop are kept HIGH.
2. When the CLK goes HIGH, the Q output will take on the value of the D
input. If CLK =1 and D =1, the NAND gate-1 output goes 0 which is the
S input of the basic NAND-based S-R flip-flop and NAND gate-2 output
goes 1 which is the R input of the basic NAND-based S-R flip-flop.
Therefore, for S = 0 and R = 1, the flip-flop output will be 1, i.e., it follows
D input. Similarly, for CLK=1 and D = 0, the flip-flop output will be 0. If D
changes while the CLK is HIGH, Q will follow and change quickly.
The logic symbol for the D flip-flop is shown in Figure 5.4(b). A simple way
of building a delay D flip-flop is shown in Figure 5.4(c). The truth table of the D
flip-flop is given in Table 5.2, from which it is clear that the next state of the flip-flop
(Qn+1) follows the value of the input D when the clock pulse is applied. As the
transfer of data from the input to the output is delayed, it is known as the Delay (D)
flip-flop. The D-type flip-flop is either used as a delay device or as a latch to store
1 bit of binary information.
(a) Using NAND Gates (b) Logic Symbol (c) Using S-R Flip-Flop
From the above state diagram, it is clear that when D =1, the next state will
be 1; when D = 0, the next state will be 0, irrespective of its previous state. From
the state diagram, one can draw the Present state–Next state table and the
application or excitation table for the Delay flip-flop as shown in Table 5.3 and
Table 5.4 respectively.
Table 5.3 Present State–Next State Table for D Flip-Flop
Qn  Qn+1  Excitation Input D
0   0     0
0   1     1
1   0     0
1   1     1
Using the Present state–Next state table, the K-map for the next state (Qn+1)
of the Delay flip-flop can be drawn as shown in Figure 5.6 and the simplified
expression for Qn+1 can be obtained as described below.
From the above K-map, the characteristic equation for the Delay flip-flop is,
Qn+1 = D
Hence, in a Delay flip-flop, the next state follows the Delay input.
(a) J-K Flip-Flop using S-R Flip-Flop (b) Graphic Symbol of J-K Flip-Flop
From Table 5.6, a Karnaugh map (K-map) for the next state (Qn+1) can be
drawn as shown in Figure 5.9 and the simplified logic expression which represents
the characteristic equation of the J-K flip-flop can be obtained as follows.
From the K-map shown in Figure 5.9, the characteristic equation of the J-K
flip-flop can be written as,
Qn+1 = JQn′ + K′Qn
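The characteristic equation can be checked against the set, reset, hold and toggle cases in Python (illustrative sketch):

```python
def jk_next(q, j, k):
    """J-K characteristic equation: Qn+1 = J·Qn' + K'·Qn."""
    return (j & (1 - q)) | ((1 - k) & q)
```

J = K = 0 holds the state, J = 1, K = 0 sets, J = 0, K = 1 resets, and J = K = 1 toggles, in agreement with the J-K truth table.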
Another basic flip-flop, called the T or Trigger or Toggle flip-flop, has only a
single data (T) input, a clock input and two outputs Q and Q′. The T-type flip-flop
is obtained from a J-K flip-flop by connecting its J and K inputs together. The
designation T comes from the ability of the flip-flop to 'toggle' or complement its
state.
The block diagram of a T flip-flop and its circuit implementation using a J-
K flip-flop are shown in Figure 5.10. The J and K inputs are wired together. The
truth table for T flip-flop is shown in Table 5.8.
(a) Block Diagram of T Flip-Flop (b) T Flip-Flop using a J-K Flip Flop
When the T input is in the 0 state (i.e., J = K = 0) prior to a clock pulse, the
Q output will not change with clocking. When the T input is at 1 (i.e., J = K = 1)
prior to clocking, the output will be in the Q′ state after clocking. In other
words, if the T input is a logical 1 and the device is clocked, then the output will
change state regardless of what the output was prior to clocking. This is called
toggling, hence the name T flip-flop.
Table 5.8 Truth Table of T Flip-Flop
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0
The above truth table shows that when T = 0, then Qn+1 = Qn, i.e., the next
state is the same as the present state and no change occurs. When T = 1, then
Qn+1 = Qn′, i.e., the state of the flip-flop is complemented.
From the above state diagram, it is clear that when T = 1, the flip-flop
changes or toggles its state irrespective of its previous state. When T = 1 and
Qn = 0, the next state will be 1, and when T = 1 and Qn = 1, the next state will be
0. Similarly, one can understand that when T = 0, the flip-flop retains its previous
state. From the above state diagram, one can draw the Present state–Next state
table and application or excitation table for the Trigger flip-flop as shown in Table
5.9 and Table 5.10, respectively.
Table 5.9 Present State–Next State Table for T Flip-Flop
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0
From Table 5.9, the K-map for the next state (Qn+1) of the Trigger flip-flop
can be drawn as shown in Figure 5.12 and the simplified expression for Qn+1 can
be obtained as follows.
From the K-map shown in Figure 5.12, the characteristic equation for the Trigger
flip-flop is,
Qn+1 = TQn′ + T′Qn
So, in a Trigger flip-flop, the next state will be the complement of the previous
state when T = 1.
5.2.5 Master–Slave Flip-Flops
If J = 1 and K = 0, the master flip-flop sets on the positive clock edge. The
HIGH Q (1) output of the master drives the input ( J ) of the slave. So, when the
negative clock edge hits, the slave also sets. The slave flip-flop copies the action
of the master flip-flop.
If J = 0 and K = 1, the master resets on the leading edge of the CLK pulse.
The HIGH Q′ output of the master drives the input (K) of the slave flip-flop. Then,
the slave flip-flop resets at the arrival of the trailing edge of the CLK pulse. Once
again, the slave flip-flop copies the action of the master flip-flop.
If J = K = 1, the master flip-flop toggles on the positive clock edge and the
slave toggles on the negative clock edge. The condition J = K = 0 input does not
produce any change.
Master–Slave flip-flops operate from a complete clock pulse and the outputs
change on the negative transition.
5.3 REGISTERS
[Keypad digits 0–9 feed an encoder and shift register; a decoder and a second shift register drive the processing unit.]
Fig. 5.15 Block Diagram of a Digital System using Shift Registers
There are two modes of operation for registers. The first operation is series
or serial operation. The second type of operation is parallel shifting. Input and
output functions associated with registers include (1) serial input/serial output (2)
serial input/parallel output (3) parallel input/parallel output (4) parallel input/serial
output.
Hence input data are presented to registers in either a parallel or a serial
format.
To input parallel data to a register requires that all the flip-flops be affected
(set or reset) at the same time. To output parallel data requires that the flip-flop Q
outputs be accessible. Serial input data loading requires that one data bit at a time
is presented to either the most or least significant flip-flop. Data are shifted from
the flip-flop initially loaded to the next one in series. Serial output data are taken
from a single flip-flop, one bit at a time.
Serial data input or output operations require multiple clock pulses. Parallel
data operations only take one clock pulse. Data can be loaded in one format and
removed in another. Two functional parts are required by all shift registers: (1)
data storage flip-flops and (2) logic to load, unload and shift the stored information.
The block diagrams of the four basic register types are shown in Figure 5.16.
Registers can be designed using discrete flip-flops (S-R, J-K and D-type). Registers
are also available as MSI devices.
[Figure 5.16: The four basic register types: (a) serial in/serial out, (b) serial in/parallel out, (c) parallel in/serial out, (d) parallel in/parallel out.]
[Figure 5.17: 4-stage shift-left registers with stages D, C, B, A driven by a common shift-pulse line: (a) using J-K flip-flops, (b) using D-type flip-flops.]
For the register of Figure 5.17(b) using D FFs, a single data line is connected
between stages; again, 4 shift pulses are required to shift a 4-bit word into the
4-stage register.
The shift pulse is applied to each stage, operating each simultaneously. When
the shift pulse occurs, the data input is shifted into that stage. Each stage is set or
reset corresponding to the input data at the time the shift pulse occurs. Thus the
input data bit is shifted into stage A by the first shift pulse. At the same time the
data of stage A is shifted into stage B, and so on for the following stages. For each
shift pulse, data stored in the register stages shift left by one stage. New data are
shifted into stage A, whereas the data present in stage D are shifted out (to the
left) for use by some other shift register or computer unit.
For example, consider starting with all stages reset and applying a steady
logical-1 as data input to stage A. The data in each stage after each of four
shift pulses is shown in Table 5.11. Notice in Table 5.11 that the logical-1 input
shifts into stage A and then shifts left to stage D after four shift pulses.
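Table 5.11 can be reproduced with a two-line Python model of the shift-left register (the stage order [D, C, B, A] follows the table's columns; names are illustrative):

```python
def shift_left(stages, data_in):
    """One shift pulse on a 4-stage register held as [D, C, B, A]:
    each stage takes the value of the stage to its right, and the new
    data bit enters stage A."""
    return stages[1:] + [data_in]

# Reproduce Table 5.11: steady logical-1 input, all stages initially reset.
reg = [0, 0, 0, 0]
history = [reg]
for _ in range(4):
    reg = shift_left(reg, 1)
    history.append(reg)
```

After four pulses the register holds 1111, with the first 1 having marched from stage A to stage D.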
As another example, consider shifting alternate 0 and 1 data into stage A,
starting with all stages at logical-1. Table 5.12 shows the data in each stage after
each of four shift pulses.
Table 5.11 Operation of Shift-Left Register
Shift Pulse D C B A
0 0 0 0 0
1 0 0 0 1
2 0 0 1 1
3 0 1 1 1
4 1 1 1 1
As a third example of shift register operation, consider starting with the contents
in step 4 of Table 5.12 and applying four more shift pulses while placing a steady
logical-0 as data input to stage A. This is shown in Table 5.13.
Table 5.12 Shift-Register Operation
Shift Pulse D C B A
0 1 1 1 1
1 1 1 1 0
2 1 1 0 1
3 1 0 1 0
4 0 1 0 1

Table 5.13 Final Stage
Shift Pulse D C B A
0 0 1 0 1
1 1 0 1 0
2 0 1 0 0
3 1 0 0 0
4 0 0 0 0
[Four J-K flip-flops QA–QD cascaded with a common CLK line and a serial data input.]
Fig. 5.18 J-K Flip-Flops in Shift Right Register
When the second clock pulse occurs, the 0 on the data input is "shifted" into
FF A because FF A RESETs, and the 1 that was in FF A is "shifted" into FF B.
The next 1 in the binary number is now put onto the data-input line, and a clock
pulse is applied. The 1 is entered into FF A, the 0 stored in FF A is shifted into FF
B, and the 1 stored in FF B is shifted into FF C. The last bit in the binary number,
Sequential Circuits a l, is now applied to the data input, and a clock pulse is applied. This time the l is
entered into FF A, the l stored in FF A is shifted into FF B, the 0 stored in FF B is
shifted into FF C, and the l stored in FF C is shifted into FF D. This completes the
serial entry of the 4-bit binary number into the shift register, where it can be stored
NOTES for any amount of time. Table 5.15 shows the action of shifting all logical-l inputs
into an initially reset shift register. Table 5.14 shows the register operation for the
entry of 1101.
Table 5.14 Register Operation (Entry of 1101)
Shift Pulse   QA QB QC QD
0             0 0 0 0
1             1 0 0 0
2             0 1 0 0
3             1 0 1 0
4             1 1 0 1

Table 5.15 Shifting Logical-1 Inputs
Shift Pulse   QA QB QC QD
0             0 0 0 0
1             1 0 0 0
2             1 1 0 0
3             1 1 1 0
4             1 1 1 1
The waveforms shown in Figure 5.19 illustrate the entry of the 4-bit number 0100.
For a J-K FF, the data bit to be shifted into the FF must be present at the J and
K inputs when the clock transitions (low or high). Since the data bit is either a 1 or
a 0, there are two cases:
1. To shift a 0 into the FF, J = 0 and K = 1.
2. To shift a 1 into the FF, J = 1 and K = 0.
At time A: All the FFs are reset. The FF outputs just after time A are QRST = 0000.
At time B: The FFs all contain 0s, so the FF outputs remain QRST = 0000.
[Figure 5.19: waveform diagram for the entry of 0100 at times A to D, showing the clock, the serial data input (J, K) and the outputs Q, R, S, T; followed by a 4-bit shift register shown as (a) logic diagram and (b) logic symbol (SRG 4) with data input D, CLK input and outputs QA QB QC QD.]
When SHIFT/LOAD is HIGH, AND gates G1 through G3 are
disabled and AND gates G4 through G6 are enabled, allowing the data bits to shift
right from one stage to the next. The OR gates allow either the normal shifting
operation or the parallel data-entry operation, depending on which AND gates
are enabled by the level on the SHIFT/LOAD input.
[Figure: 4-bit parallel-load shift register; (a) logic diagram with parallel data inputs A to D, the SHIFT/LOAD line, gates G1 to G6, flip-flop outputs QA to QD and the serial data out; (b) logic symbol (SRG 4).]
5.4 COUNTERS
[Figure: (a) logic diagram of J-K flip-flops A to D with a common clock input and outputs QA to QD; (b) waveform diagram for clock pulses 1 to 16 and outputs QA to QD.]
Fig. 5.24 4-Bit Binary Ripple Counter
Table 5.16 State Table of 4-Bit Binary Ripple Counter
Number of         States
Clock Pulses      QD QC QB QA
0                 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0
MOD-Number or Modulus
The MOD number (or the modulus) of a counter is the total number of states
through which the counter goes in each complete cycle.
MOD number = 2^N
where N = number of flip-flops.
The maximum binary number counted by the counter is 2^N – 1. Thus, a 4 flip-flop counter
can count as high as (1111)₂ = 2^4 – 1 = 16 – 1 = (15)₁₀. The MOD number can be
increased by adding more FFs to the counter.
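In code, the MOD-number relationship is a one-liner. The following Python sketch (the function name is my own, added for illustration) computes the modulus and the maximum count for a given number of flip-flops:

```python
def mod_number(n_flipflops):
    # A counter built from N flip-flops cycles through 2**N states.
    return 2 ** n_flipflops

for n in (2, 3, 4):
    # modulus and highest count for an n-flip-flop counter
    print(n, mod_number(n), mod_number(n) - 1)
# with 4 flip-flops: MOD-16, maximum count 15 (binary 1111)
```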
5.4.2 Synchronous Counter Operations
A synchronous, parallel, or clocked counter is one in which all stages are triggered
simultaneously.
When the carry has to propagate through a chain of n flip-flops, the overall
propagation delay time is n·tpd. For this reason, ripple counters are too slow for some
applications. To get around the ripple-delay problem, we can use a synchronous counter.
A 4-bit (MOD-16) synchronous counter with parallel carry is shown in
Figure 5.25. The clock is connected directly to the CLK input of each flip-flop,
i.e., the clock pulses drive all flip-flops in parallel. In this counter only the LSB
flip-flop A has its J and K inputs connected permanently to VCC, i.e., at the high
level. The J, K inputs of the other flip-flops are driven by some combination of
flip-flop outputs. The J and K inputs of flip-flop B are connected to the QA output
of flip-flop A. The J and K inputs of FF C are connected to the AND of the
outputs QA and QB. Similarly, the J and K inputs of FF D are connected to the
AND of the outputs QA, QB and QC.
[Figure 5.25: 4-bit (MOD-16) synchronous counter; J-K flip-flops A to D clocked in parallel, with JA and KA tied to VCC (1) and the J, K inputs of the higher stages driven by ANDed lower-stage outputs.]
For this circuit to count properly, on a given negative transition of the clock, only
those FFs that are supposed to toggle on that transition should have J = K = 1
when the negative transition occurs. According to the state Table 5.17, FF A is
required to change state with the occurrence of each clock pulse. FF B changes its
state when QA = 1. Flip-flop C toggles only when QA = QB = 1, and flip-flop D
changes state only when QA = QB = QC = 1. In other words, a flip-flop toggles
on the next negative clock edge if all lower bits are 1s.
The counting action of the counter is as follows:
1. The first negative clock edge sets QA to get Q = 0001.
2. Since QA is 1, FF B is conditioned to toggle on the next negative clock edge.
3. When the second negative clock edge arrives, QB and QA simultaneously
toggle and the output word becomes Q = 0010. This process continues.
4. By adding more flip-flops and gates we can build a synchronous counter of
any length. The advantage of the synchronous counter is its speed; it takes only
one propagation delay time for the correct binary count to appear after the
clock edge hits.
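The toggle rule (a flip-flop toggles on the next clock edge if all lower bits are 1s) can be simulated directly. The Python sketch below is illustrative only; the function name and list layout are my own, with QA held as the least significant element:

```python
def sync_count_step(q):
    """One clock edge of the synchronous counter.

    q = [QA, QB, QC, QD], LSB first. Every flip-flop whose lower-order
    bits are all 1 toggles; QA toggles on every edge (its condition is
    vacuously true). All toggles happen simultaneously, so every
    condition is evaluated on the old state q, not the new one.
    """
    new = q[:]
    for i in range(len(q)):
        if all(q[j] == 1 for j in range(i)):
            new[i] ^= 1
    return new

q = [0, 0, 0, 0]
counts = []
for _ in range(4):
    q = sync_count_step(q)
    counts.append(q[::-1])       # display as QD QC QB QA
print(counts)                    # the first four rows of Table 5.17
```

The printed states are 0001, 0010, 0011, 0100, matching steps 1 to 4 above.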
Table 5.17 State Table of 4-Bit Synchronous Counter
State QD QC QB QA
0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0
Design of BCD or Decade (MOD-10) Counter
To design a BCD or decade (MOD-10) counter that has ten states, i.e., 0 to 9,
the number of flip-flops required is four. Let us assume that the MOD-10 counter
has ten states, viz. a, b, c, d, e, f, g, h, i and j.
Step 1. State diagram: The state diagram for the MOD-10 counter can be
drawn as shown in Figure 5.26. Here, it is assumed that the state transition from
one state to another takes place when the clock pulse is asserted. When the clock
is unasserted, the counter remains in the present state.
Step 2. State table: From the above state diagram, one can draw the PS-NS
table as shown in Table 5.18.
Table 5.18 PS-NS Table for MOD-10 Counter
Table 5.19 PS–NS Table for MOD-10 Counter
Step 4. Excitation table: The excitation table, having entries for the flip-flop inputs
(J3K3, J2K2, J1K1 and J0K0), can be drawn from the above PS–NS table using the
application table of the JK flip-flop given earlier, as shown in Table 5.20.
Table 5.20 Excitation Table for MOD-10 Counter
PS NS Excitation Inputs
q3 q2 q1 q0   Q3 Q2 Q1 Q0   J3 K3  J2 K2  J1 K1  J0 K0
0 0 0 0 0 0 0 1 0 d 0 d 0 d 1 d
0 0 0 1 0 0 1 0 0 d 0 d 1 d d 1
0 0 1 0 0 0 1 1 0 d 0 d d 0 1 d
0 0 1 1 0 1 0 0 0 d 1 d d 1 d 1
0 1 0 0 0 1 0 1 0 d d 0 0 d 1 d
0 1 0 1 0 1 1 0 0 d d 0 1 d d 1
0 1 1 0 0 1 1 1 0 d d 0 d 0 1 d
0 1 1 1 1 0 0 0 1 d d 1 d 1 d 1
1 0 0 0 1 0 0 1 d 0 0 d 0 d 1 d
1 0 0 1 0 0 0 0 d 1 0 d 0 d d 1
1 0 1 0 d d d d d d d d d d d d
1 0 1 1 d d d d d d d d d d d d
1 1 0 0 d d d d d d d d d d d d
1 1 0 1 d d d d d d d d d d d d
1 1 1 0 d d d d d d d d d d d d
1 1 1 1 d d d d d d d d d d d d
Step 6. Schematic diagram: Using the above excitation equations, the circuit
diagram for the MOD-10 counter can be drawn as shown in Figure 5.28.
1. The latch with the additional control input is called the flip-flop.
2. Flip-flops are of different types depending on how their inputs and clock
pulses cause transition between two states. There are four basic types,
namely, S-R, J-K, D and T flip-flops.
3. The T-type flip-flop is obtained from a J-K flip-flop by connecting its J and
K inputs together.
4. A register is a group of flip-flops used to store or manipulate data or both.
Each flip-flop is capable of storing one bit of information.
5. A register stores a sequence of 0s and 1s. Registers that are used to store
information are known as memory registers. If they are used to process
information, they are called shift registers.
6. The MOD-number (or the modulus) of a counter is the total number of
states which the counter goes through in each complete cycle.
5.6 SUMMARY
• The latch with the additional control input is called the flip-flop. The additional
control input is either the clock or enable input.
• Flip-flops are of different types depending on how their inputs and clock
pulses cause transitions between two states. There are four basic types,
namely, S-R, J-K, D and T flip-flops.
• The D (delay) flip-flop has only one input, called the delay (D) input, and
two outputs Q and Q’.
• A J-K flip-flop has a characteristic similar to that of an S-R flip-flop. In
addition, the indeterminate condition of the S-R flip-flop is permitted in it.
Inputs J and K behave like inputs S and R to set and reset the flip-flop,
respectively.
• The T (trigger or toggle) flip-flop has only a single data (T) input, a clock
input and two outputs Q and Q’. The T-type flip-flop is obtained from a J-K
flip-flop by connecting its J and K inputs together.
• A Master–Slave flip-flop can be constructed using two J-K flip-flops. The
first flip-flop, called the Master, is driven by the positive edge of the clock
pulse; the second flip-flop, called the Slave, is driven by the negative edge
of the clock pulse.
• A register is a group of flip-flops used to store or manipulate data or both.
Each flip-flop is capable of storing one bit of information. An n-bit register
has n flip-flops and is capable of storing any binary information containing
n bits.
• Registers that are used to store information are known as memory registers.
If they are used to process information, they are called shift registers.
• A register which is capable of shifting data either left or right is called a
bidirectional shift register. A register that can shift in only one direction is
called a unidirectional shift register.
• The MOD number (or the modulus) of a counter is the total number of
states which the counter goes through in each complete cycle.
• A synchronous, parallel, or clocked counter is one in which all stages are
triggered simultaneously.
Synchronous: A mode of operation in which changes in the output occur at a
specified point on a triggering input.
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic: Applications and Design. Thomson
Learning.
Data Representation
6.0 INTRODUCTION
The computer stores all information in the form of binary numbers, i.e., all information
stored on a computer is written in the machine language that the computer understands.
This machine language uses binary numbers, which comprise only two symbols, 0
and 1. Thus, a bit (0 or 1) is the smallest unit of data in the binary system. You have
already studied the number system. In this unit, you will learn about fixed and
floating point representation, various types of binary codes, and how errors can be
detected and corrected after transmission through a channel.
6.1 OBJECTIVES
Representing data digitally offers the following advantages:
• Digital data is less affected by noise.
• Digital signal allows extra data to be carried over to provide a means for
detection and correction of errors.
• Processing of digital data is relatively easy. It can be performed in real-time
or non real-time.
• A single type of media can be used to store many different types of data like
video, speech, audio, etc. They may be stored on tape, hard-disk or CD-
ROM.
• A digital system provides more dependable response while an analog
system’s accuracy depends on parameters like component tolerance,
temperature, power supply variations, etc., and therefore, two analog
systems are never identical.
• Digital systems are considered more adaptable and can be reprogrammed
with software. Analog systems need different hardware for any functional
changes.
The disadvantages of digital conversion are:
• Data samples are quantised to given levels and introduce an error called
quantisation error. However, the quantisation error can be reduced by
increasing the number of bits used to represent each sample.
• The analog signal that is sampled at regular intervals to convert it into digital
signal will require large storage space. However, the data once stored tends
to be reliable and will not degrade over time.
S (sign)    Exponent (E)    Mantissa (fraction, F)
1 bit       8 bits          23 bits
For example, the single-precision representations of the decimal numbers
0.0003754 and 3754 are shown in Figure 6.1.
Figure 6.1 Single-Precision Formats
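The 1/8/23 split of the single-precision format can be inspected from Python using the standard struct module. This is an illustrative sketch (the helper name is my own, not from the text):

```python
import struct

def float_to_fields(x):
    """Pack x as an IEEE-754 single-precision value and split the
    32 bits into sign (1 bit), exponent (8 bits), fraction (23 bits)."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF        # stored with a bias of 127
    fraction = bits & 0x7FFFFF
    return sign, exponent, fraction

print(float_to_fields(1.0))    # (0, 127, 0)
print(float_to_fields(-2.0))   # (1, 128, 0)
```

Note that the stored exponent is biased: 1.0 has a true exponent of 0 but a stored field of 127.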
Table 6.2 Equivalent Values of Binary Numbers in Sign Magnitude, 1’s Complement
and 2’s Complement
Example 6.3: Perform 5 - 4 using 1’s complement representation.
Solution:
5 - 4 = 5 + (-4)
+5 = 0101
+4 = 0100, so -4 = 1011 in 1’s complement form
    0101
  + 1011
  ------
  1 0000   (a carry is generated)
If a carry is generated, add it back (end-around carry):
    0000
  +    1
  ------
    0001 = +1
2’s Complement
In this case, positive numbers are represented as in 1’s complement or sign
magnitude representation. To represent a negative number, the steps are as follows:
1. Write the positive number
2. Take its 1’s complement
3. Add 1
For example,
+13 = 01101
1’s complement:   10010
Add 1:           +    1
-13 =             10011   (2’s complement)
Example 6.4: Perform (5 - 4) using 2’s complement representation.
Solution:
 5 = 0101
-4: +4 = 0100
         1011   1’s complement
         1100   2’s complement
    0101   2’s complement of 5
  + 1100   2’s complement of -4
  ------
  1 0001   (a carry is generated)
The carry is discarded, leaving 0001, i.e., +1.
Range for four bits: [-8 to 7]
For n bits: -2^(n-1) to (2^(n-1) - 1)
For n = 4: -2^3 to 2^3 - 1, i.e., -8 to 7
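Python integers are unbounded, but masking to a fixed width reproduces the 4-bit arithmetic above; the mask itself discards the carry out of the MSB. An illustrative sketch (names are my own):

```python
BITS = 4
MASK = (1 << BITS) - 1            # 0b1111 for 4-bit words

def twos_complement(value, mask=MASK):
    """Fixed-width 2's-complement bit pattern of a signed value."""
    return value & mask

minus_four = twos_complement(-4)          # 0b1100
result = (5 + minus_four) & MASK          # discard the carry out of the MSB
print(format(minus_four, '04b'))          # 1100
print(format(result, '04b'))              # 0001, i.e. 5 - 4 = +1
```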
Suppose a number is represented in 2’s complement form as 1011; then its
equivalent decimal value is found as follows:
1011   2’s complement form
1010   after subtracting 1 (1’s complement form)
0101   complementing the bits gives the magnitude
-5     decimal representation
Another Method
Take the 1’s complement and add 1; this gives the magnitude of the decimal
equivalent:
1’s complement of 1011 is 0100
0100 + 1 = 0101 = 5, so 1011 represents -5.
Similarly, a number represented in 2’s complement form as 11011 has the
equivalent decimal value
1’s complement of 11011 is 00100
00100 + 1 = 00101 = 5, so 11011 represents -5.
Further, a number represented in 2’s complement form as 111011 has the
equivalent decimal value
1’s complement of 111011 is 000100
000100 + 1 = 000101 = 5, so 111011 represents -5.
In 2’s complement representation, to extend the number of bits, copy the MSB
(sign extension):
1011 = -5
11011 = -5
111011 = -5
The MSB is copied into the new bit positions.
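Interpreting a 2's-complement pattern, and the fact that sign extension does not change its value, can be checked with a short sketch (the function name is my own, added for illustration):

```python
def from_twos_complement(bits_str):
    """Decode a bit string as a 2's-complement signed integer."""
    value = int(bits_str, 2)
    if bits_str[0] == '1':               # MSB set: the number is negative
        value -= 1 << len(bits_str)
    return value

# Copying the MSB into the extra positions leaves the value unchanged:
for pattern in ('1011', '11011', '111011'):
    print(pattern, from_twos_complement(pattern))   # each decodes to -5
```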
Example 6.5: Perform (5 + 4) using 2’s complement representation.
Solution:
+5   0101   (positive)
+4   0100   (positive)
     1001   (negative sign bit)
Overflow has occurred.
Overflow for Binary Numbers
When the sum of two positive numbers or two negative numbers exceeds the
representable range, the condition is known as overflow. In signed operation,
overflow may occur only when two numbers of the same sign are added. Let X
and Y be the sign bits of the two numbers. If Z is the resultant sign bit, then the
condition for overflow is
Overflow = X·Y·Z' + X'·Y'·Z
For example, with x = -5 (1011) and y = -4 (1100):
  1011 + 1100 = 1 0111, so Z = 0
Here X = Y = 1 and Z = 0, so the term X·Y·Z' is 1 and overflow has occurred.
The second method to detect overflow is as follows:
Overflow = Cin XOR Cout
  0 → No overflow
  1 → Overflow
where Cin is the carry into the MSB and Cout is the carry from the MSB.
For example, with x = +5 (0101) and y = +4 (0100):
  0101 + 0100 = 1001, with Cin = 1 and Cout = 0, so overflow has occurred.
Extending each number by one bit (copying the MSB) removes the overflow:
  00101 + 00100 = 01001 = +9
Note: If overflow occurs, then extend the number of bits by copying the MSB.
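The Cin XOR Cout check translates directly into code. The sketch below is illustrative (names are my own); it computes the carry into and out of the sign bit explicitly:

```python
def add_with_overflow(x, y, bits=4):
    """Add two signed numbers in `bits`-wide 2's complement.

    Overflow is flagged as Cin XOR Cout, where Cin is the carry into
    the sign bit and Cout is the carry out of it.
    """
    mask = (1 << bits) - 1
    xs, ys = x & mask, y & mask
    low = (xs & (mask >> 1)) + (ys & (mask >> 1))
    c_in = low >> (bits - 1)              # carry into the MSB
    total = xs + ys
    c_out = total >> bits                 # carry out of the MSB
    return total & mask, bool(c_in ^ c_out)

print(add_with_overflow(5, 4))     # (0b1001, True): +9 does not fit in 4 bits
print(add_with_overflow(5, -4))    # (0b0001, False): no overflow
```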
The decimal value represented by a weighted code is given by
N = a(n-1) W(n-1) + … + a1 W1 + a0 W0 + B
where
ai = code coefficient
Wi = weight
B = positive or negative bias
n = number of bits in the code
Example 6.6: Try to convert the binary number 10010110 to a decimal number.
Solution:
It turns out that (10010110)₂ = (150)₁₀, but it takes quite a lot of time and effort to
make this conversion without a calculator.
Self-Complementary Codes: Certain codes have a distinct advantage in that
their logical complement is the same as the arithmetic complement. Examples
include the excess-3, 6311 and 2421 codes.
Reflective Codes: The 9’s complement of a reflected binary-coded decimal (BCD)
code word is achieved simply by changing only one of its bits. A reflected code is
characterized by the fact that it emerges from the central point with one bit changed.
Unit Distance Code (UDC): There is only one bit change between adjacent
code words, independent of the direction of counting. The UDCs have the special
advantage that they minimize transition errors.
Binary Coded Decimal (BCD): It makes conversion to decimals much easier.
Table 6.3 shows the 4-bit BCD code for the decimal digits 0–9. It should be
noted that the BCD code is a weighted code, that is, its weights are 8-4-2-1. In
BCD code, each decimal digit is represented with four bits. Refer Table 6.3.
Table 6.3 Binary-Coded Decimal Codes
In the BCD code, the six combinations 1010, 1011, 1100, 1101, 1110 and 1111
are invalid digits, known as invalid BCD.
Following is an example of a valid BCD code:
(839)₁₀ = (1000 0011 1001)BCD
Note: If an invalid state occurs in BCD addition, then we add 6 to get the correct result.
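BCD encoding is digit-by-digit, which a one-line Python sketch makes plain (the helper name is my own, added for illustration):

```python
def to_bcd(n):
    """8421 BCD: each decimal digit becomes its own 4-bit group."""
    return ' '.join(format(int(digit), '04b') for digit in str(n))

print(to_bcd(839))   # 1000 0011 1001, matching the example above
```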
The MSB has a weight of 8 and the LSB has a weight of only 1. This code
is more precisely known as the 8421 BCD code.
The 8421 part of the name gives the weighting of each place in the 4-bit
code. There are several other BCD codes that have other weights for the four
place values. Because the 8421 BCD code is the most popular, it is customary to
refer to it simply as the BCD code.
The excess-3 code for a given decimal number is determined by adding ‘3’
to each decimal digit in the given number and then replacing each digit of the newly
found decimal number by its 4-bit binary equivalent. If the addition of ‘3’ to a digit
produces a carry, as is the case with the digits 7, 8 and 9, that carry should not be
taken forward. The result of addition should be taken as a single entity and
subsequently replaced with its excess-3 code equivalent.
Excess-3 Code = BCD Code + 0011: It is an unweighted code. In this
code, the first three and the last three 4-bit combinations are invalid. Excess-3 is a
self-complementing code:
0 - (0011), 1’s complement (1100) - 9
1 - (0100), 1’s complement (1011) - 8
2 - (0101), 1’s complement (1010) - 7
In the 2421, 3321, 4221, 4311 and 5211 codes the weights in each code sum to
nine; so each is a self-complementing weighted code.
Example 6.7: Let us find the excess-3 code for the decimal number 597.
Solution:
1. The addition of ‘3’ to each digit yields the three new digits/numbers ‘8,’
‘12’ and ’10.’
2. The corresponding 4-bit binary equivalents are 1000, 1100 and 1010,
respectively.
3. The excess-3 code for 597 is therefore given by: 1000 1100 1010 =
100011001010.
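The same three steps can be scripted. A Python sketch (the helper name is my own, added for illustration) that reproduces Example 6.7:

```python
def to_excess3(n):
    """Excess-3: add 3 to each decimal digit, then write the result in
    4 bits. Any carry from the addition stays inside its 4-bit group."""
    return ''.join(format(int(digit) + 3, '04b') for digit in str(n))

print(to_excess3(597))   # 100011001010, as in Example 6.7
```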
Also, it is normal practice to represent a given decimal digit or number using
the maximum number of digits that the digital system is capable of handling. For
example, in 4-digit decimal arithmetic, 5 and 37 would be written as 0005 and
0037, respectively. The corresponding 8421 BCD equivalents would be
0000000000000101 and 0000000000110111 and the excess-3 code equivalents
would be 0011001100111000 and 0011001101101010.
Decimal equivalent of excess-3 code can be determined by first splitting the
number into four-bit groups, starting from the radix point, and then subtracting
0011 from each 4-bit group. The new number is the 8421 BCD equivalent of the
given excess-3 code, which can subsequently be converted into the equivalent
decimal number.
The complement of the excess-3 code of a given decimal number yields the
excess-3 code for 9’s complement of the decimal number. As adding 9’s
complement of a decimal number B to a decimal number A achieves A – B, the
excess-3 code can be used effectively for both addition and subtraction of decimal
numbers.
Excess-3 code is also known as self-complementing code. Each decimal
digit is coded into a 4-digit binary code. The code for each decimal digit is obtained
by adding decimal 3 to the natural BCD.
Notes:
1. It is unweighted code.
2. It is self-complementary code.
2421 Code: Here, for 2 we write (0010). We could also write (1000), since both
patterns have weight 2, but (1000) is not used because the self-complementing
property would then not be followed. Refer Table 6.6.
Table 6.6 Decimal 2421 Code Conversion
Gray Code: It is an unweighted binary code in which two successive values differ
only by one bit. The maximum error that can creep into a system using the binary
Gray code to encode data is much less than the worst-case error encountered in
the case of straight binary encoding, that is, minimum error code.
Table 6.7 Binary and Gray Code Equivalents of Decimal Numbers 0–15
An examination of the 4-bit Gray code numbers as listed in Table 6.7 shows
that the last entry rolls over to the first entry. That is, the last and the first entry also
differ by only one bit. This is known as the cyclic property of the Gray code, that
is, cyclic permutation code.
A Gray code is a code assigned to each of a contiguous set of integers, or
to each member of a circular list—a word of symbols such that each two adjacent
code words differ by one symbol. These codes are also known as single-distance
codes, reflecting the Hamming distance of 1 between adjacent codes. There can
be more than one Gray code for a given word length, but the term was first applied
to a particular binary code for the non-negative integers, the Binary-Reflected
Gray Code (BRGC), the 3-bit version of which is characterised as follows:
1. It is an unweighted code.
2. Successive numbers differ by only one bit.
3. It is also known as a Unit Distance Code (UDC).
4. The Gray code is a cyclic code.
5. It is a minimum error code.
Example 6.8: Convert binary number 1011 into Gray code.
Solution:
MSB of Gray code = MSB of binary code.
From left to right, add each adjacent pair of binary code bits to get the next
Gray code bit, discarding carries:
B3 B2 B1 B0 = 1 0 1 1
G3 G2 G1 G0 = 1 1 1 0
Thus, 1011 (binary) = 1110 (Gray).
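The add-adjacent-bits rule is exactly an XOR of the number with itself shifted right by one, so the conversion is a one-liner in Python (illustrative, not from the text):

```python
def binary_to_gray(b):
    """Gray code: each Gray bit is the XOR of a binary bit with the bit
    to its left; the MSB is copied unchanged."""
    return b ^ (b >> 1)

print(format(binary_to_gray(0b1011), '04b'))   # 1110, as in Example 6.8
```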
2. It is an unweighted code.
3. It is a Unit Distance Code (UDC).
Alphanumeric Codes
Alphanumeric codes are also called character codes. These are binary codes
which are used to represent alphanumeric data. The codes write alphanumeric
data including letters of the alphabet, numbers, mathematical symbols and
punctuation marks in a form that is understandable and processable by a computer.
These codes enable us to interface input–output devices such as keyboards, printers,
VDUs, etc., with the computer.
One of the better known alphanumeric codes during the early days of
evolution of computers when punched cards used to be the medium of inputting
and outputting data, is the 12-bit Hollerith code. The Hollerith code was used in
those days to encode alphanumeric data on punched cards.
Two widely used alphanumeric codes include the American Standard Code
for Information Interchange (ASCII) and the Extended Binary-Coded Decimal
Interchange Code (EBCDIC). While the former is popular with microcomputers
and is used nearly in all personal computers and workstations, the latter is mainly
used with larger systems.
American Standard Code for Information Interchange (ASCII Code)
American Standard Code for Information Interchange (ASCII), pronounced as
‘ask-ee’ is strictly a 7-bit code based on the English alphabet. ASCII codes are
used to represent alphanumeric data in computers, communications equipment
and other related devices. As it is a 7-bit code, it can at most represent 128
characters.
It currently defines 95 printable characters including 26 uppercase letters
(A–Z), 26 lowercase letters (a–z), 10 numerals (0–9) and 33 special characters
including mathematical symbols, punctuation marks and space character. In addition,
it defines codes for 33 non-printing, mostly obsolete control characters that affect
how text is processed. The 8-bit version can represent a maximum of 256
characters.
When the ASCII code was introduced, numerous computers were dealing
with 8-bit groups (or bytes) as the smallest unit of information. The eighth bit was
commonly used as a parity bit for error detection on communication lines and
other device-specific functions. Machines that did not use the parity bit typically
set the eighth bit to ‘0.’
Some Important Facts about ASCII Code
1. The numeric digits, 0–9, are encoded in sequence starting at 30H.
2. The upper case alphabetic characters are sequential beginning at 41H.
3. The lower case alphabetic characters are sequential beginning at 61H.
4. The first 32 characters (codes 0-1FH) and 7FH are control characters.
They do not have a standard symbol (glyph) associated with them.
They are used for carriage control and protocol purposes. They include
0Dh (CR or carriage return), 0Ah (LF or line feed), 0Ch (FF or form
feed), 08h (BS or backspace).
5. Most keyboards generate the control characters by holding down a
control key (CTRL) and simultaneously pressing an alphabetic
character key.
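These fixed offsets make character arithmetic simple, as a quick Python check shows:

```python
# Digits start at 30H, upper case at 41H, lower case at 61H:
print(hex(ord('0')), hex(ord('A')), hex(ord('a')))   # 0x30 0x41 0x61

# So a digit's value is its code minus 30H...
print(ord('7') - 0x30)                               # 7

# ...and upper and lower case differ by a fixed 20H:
print(chr(ord('g') - 0x20))                          # G
```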
Advantage of ASCII: The 8-bit International Organization for Standardization (ISO)
standard was developed as a true extension of ASCII, leaving the
original character mapping intact while adding further values. This made
possible the representation of a broader range of languages.
Disadvantage of ASCII: The standard suffers from incompatibilities and
limitations; in spite of this, ISO-8859-1, its variant Windows-1252 and the original
7-bit ASCII continue to be the most common character encodings in use today.
Extended Binary-Coded Decimal Interchange Code (EBCDIC)
Extended Binary-Coded Decimal Interchange Code (EBCDIC) pronounced as
‘eb-si-dik’ is another widely used alphanumeric code mainly popular with larger
systems. The code was created by IBM to extend the binary-coded decimal that
existed during those days. All IBM mainframe computer peripherals and operating
systems use EBCDIC code, and their operating systems provide ASCII and
Unicode modes to allow translation between different encodings.
It may be mentioned here that EBCDIC offers no technical advantage over
the ASCII code and its variant ISO-8859 or Unicode. Its importance in the earlier
days lay in the fact that it made it relatively easier to enter data into larger machines
with punch cards. Since punch cards are not used on mainframes any more, the
code is used in contemporary mainframe machines solely for backwards
compatibility.
It is an 8-bit code and thus can accommodate up to 256 characters. A
single byte in EBCDIC is divided into two 4-bit groups called nibbles. The first 4-
bit group, called the ‘zone,’ represents the category of the character, while the
second group, called the ‘digit,’ identifies the specific character.
Unicode
Encodings such as ASCII, EBCDIC and their variants do not have a sufficient
number of characters to be able to encode alphanumeric data of all forms, scripts
and languages. As a result, these encodings do not permit multilingual computer
processing. In addition, these encodings suffer from incompatibility: two different
encodings may use the same number for two different characters, or different
numbers for the same characters. For example, code 4E (in hex) represents the
upper-case letter ‘N’ in ASCII code and the plus sign ‘+’ in the EBCDIC code.
It is the most complete character encoding scheme that allows text of all
forms and languages to be encoded for use by the computers. It not only enables
the users to handle practically any language and script but also supports a
comprehensive set of mathematical and technical symbols greatly simplifying any
scientific information exchange.
The Unicode standard has been adopted by industry leaders such as HP,
IBM, Microsoft, Apple, Oracle, Unisys, Sun, Sybase, SAP and many more.
In digital systems, the issue of error detection and correction is of great practical
significance. Errors creep into the bit stream owing to noise or other impairments
during the course of its transmission from the transmitter to the receiver. Any such
error, if not detected and subsequently corrected can be disastrous as digital systems
are sensitive to errors and tend to malfunction if the bit error rate is more than a
certain threshold level.
Error detection and correction involves the addition of extra bits known as
check bits to the information-carrying bit stream to give the resulting bit sequence
a unique characteristic that helps in detection and localization of errors. These
additional bits are also called redundant bits as they do not carry any information.
While the addition of redundant bits helps in achieving the goal of making
transmission of information from one place to another error free or reliable, it
also makes the transmission less efficient.
When the digital information in the binary form is transmitted from one circuit
or system to another circuit, an error may occur. This means a signal corresponding
to ‘0’ may change to ‘1’ and vice versa.
Parity Code
A parity bit is an extra bit added to a string of data bits in order to detect any error
that might have crept into it while it was being stored or processed and moved
from one place to another in a digital system.
In an even parity, the added bit is such that the total number of 1s in the
data bit string becomes even. In odd parity, the added bit makes the total number
of 1s in the data bit string odd. This added bit could be a ‘0’ or a ‘1.’
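Computing a parity bit is a sum modulo 2. The sketch below (names are my own, added for illustration) appends an even-parity bit and confirms that the total count of 1s is then even:

```python
def even_parity_bit(bits):
    """The bit that makes the total number of 1s in the string even."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 0, 1]        # four 1s: already even
p = even_parity_bit(data)
print(p)                             # 0
print(sum(data + [p]) % 2 == 0)      # True for any data string
```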
The addition of a single parity bit cannot be used to detect two-bit errors,
which are a distinct possibility in data storage media such as magnetic tapes. The
single-bit parity code also cannot be used to localize or identify the error bit even
if only one bit is in error.
Block Parity Codes
If there are n rows and m columns of message bits, an odd parity bit is added to
each row and an even parity bit is added to each column. A final check is carried
out at the intersection of the failing column and row. This shows the location of the
faulty bit, such as a bit pij in the 3rd column and 4th row.
Repetition Code
The repetition code makes use of repetitive transmission of each data bit in the bit
stream. In the case of threefold repetition, ‘1’ and ‘0’ would be transmitted as
‘111’ and ‘000,’ respectively.
If in the received data bit stream bits are examined in groups of three bits,
the occurrence of an error can be detected. In the case of single-bit errors, ‘1’
would be received as 011 or 101 or 110 instead of 111, and a ‘0’ would be
received as 100 or 010 or 001 instead of 000. In both cases, the code becomes
self-correcting if the bit in the majority is taken as the correct bit.
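Majority voting over each received triplet is all the decoder needs; an illustrative Python sketch (the function name is my own):

```python
def majority_decode(triplet):
    """Threefold repetition decoder: the majority bit is taken as correct."""
    return 1 if sum(triplet) >= 2 else 0

# A single-bit error within a triplet is corrected:
print(majority_decode([1, 0, 1]))   # 1 (sent as 111, one bit flipped)
print(majority_decode([0, 1, 0]))   # 0 (sent as 000, one bit flipped)
```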
There are various forms in which the data are sent using the repetition code.
Usually, the data bit stream is broken into blocks of bits, and then each block of
data is sent some predetermined number of times. For example, if we want to
send 8-bit data given by 11011001, it may be broken into two blocks of four bits
each. In the case of threefold repetition, the transmitted data bit stream would be
110111011101100110011001. However, such a repetition code, where the bit or
block of bits is repeated three times, is not capable of correcting 2-bit errors.
The most commonly used Hamming code is the one that has a code word
length of seven bits with four message bits and three parity bits. It is also referred
to as the Hamming (7, 4) code. The code word sequence for this code is written
as P1P2D1P3D2D3D4, with P1, P2 and P3 being the parity bits and D1, D2, D3 and
D4 being the data bits.
The step-by-step process of writing the Hamming code for a certain group
of message bits and then the process of detection and identification of error bits is
given in the following example.
Example 6.10: Write the Hamming code for the 4-bit message 0110 representing
numeral ‘6.’
Solution:
The process of writing the code is illustrated in Table 1.10 with even parity. Thus,
the Hamming code for 0110 is 1100110.
Let us assume that the data bit D1 gets corrupted in the transmission channel.
The received code in that case is 1110110. In order to detect the error, the parity
is checked for the three parity relations mentioned above. During the parity check
operation at the receiving end, three additional bits X, Y and Z are generated by
checking the parity status of P1D1D2D4, P2D1D3D4 and P3D2D3D4, respectively.
These bits are a ‘0’ if the parity status is okay, and a ‘1’ if it is disturbed. In
that case, ZYX gives the position of the bit that needs correction. The process is
best explained with the help of an example.
The examination of the first parity relation gives X = 1 as the even parity is
disturbed. The second parity relation yields Y = 1 as the even parity is disturbed
here too. The examination of the third relation gives Z = 0 as the even parity is
maintained. Thus, the bit that is in error is positioned at 011 which is the binary
equivalent of ‘3.’
This implies that the third bit from the MSB needs to be corrected. After
correcting the third bit, the received message becomes 1100110 which is the
correct code.
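The whole encode/detect/correct procedure can be sketched as below, using the same P1P2D1P3D2D3D4 ordering and even-parity relations as the example (function names are illustrative):

```python
def hamming74_encode(d):
    """Even-parity Hamming (7, 4): code word P1 P2 D1 P3 D2 D3 D4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the three parity checks; ZYX gives the (1-based)
    position of a single corrupted bit, 0 meaning no error."""
    x = c[0] ^ c[2] ^ c[4] ^ c[6]
    y = c[1] ^ c[2] ^ c[5] ^ c[6]
    z = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = (z << 2) | (y << 1) | x
    if pos:
        c[pos - 1] ^= 1          # flip the faulty bit
    return c

code = hamming74_encode([0, 1, 1, 0])   # -> [1,1,0,0,1,1,0], i.e. 1100110
code[2] ^= 1                            # corrupt D1 (third bit): 1110110
hamming74_correct(code)                 # restores 1100110
```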
6.8 SUMMARY
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
BLOCK III
BASIC COMPUTER ORGANIZATION AND DESIGN
UNIT 7 INSTRUCTION CODES
Structure
7.0 Introduction
7.1 Objectives
7.2 Instruction Codes
7.2.1 Instruction Formats
7.2.2 Instruction Types
7.3 Computer Registers
7.4 Computer Instructions
7.4.1 Timing and Control
7.5 Answers to Check Your Progress Questions
7.6 Summary
7.7 Key Words
7.8 Self Assessment Questions and Exercises
7.9 Further Readings
7.0 INTRODUCTION
Computers have become indispensable in our lives today, and it is essential to know
how to use them in all aspects of life and work. Even though there might be certain differences
between one computer and another, the basic organization remains the same. The
hardware used and the codes used for feeding information into the computer may
differ superficially, but computers are similar in the actions they perform. In this unit, you
will learn about instruction codes, computer registers and computer instructions.
7.1 OBJECTIVES
7.2 INSTRUCTION CODES
A group of bits forms an instruction code that commands the computer to carry
out an operation. The operation part is the most fundamental part of an instruction
code. It specifies the operation to be performed. An instruction code needs to
define the operation, the registers or the memory where the operands are located,
and the register or memory word where the result should be stored.
The operands may come from memory, from registers or from the instruction
itself.
Computer hardware understands the language of only 1s and 0s, so
instructions are encoded as binary numbers in a format called machine language.
7.2.1 Instruction Formats
The instructions come in only three formats: register (R), immediate (I) and jump
(J), as shown in Figure 7.1.
[Figure 7.1: Instruction formats.
R-format (bits 31–0): op, opcode (6 bits); rs, source register 1 (5 bits); rt, source
register 2 (5 bits); rd, destination register (5 bits); sh, shift amount (5 bits); fn,
opcode extension (6 bits).
I-format: op, opcode (6 bits); rs, source or base register (5 bits); rt, destination or
data register (5 bits); 16-bit immediate operand or address offset.]
operand. This register, therefore, holds the initial data to be operated upon,
the intermediate results and final results of processing operations.
Instruction Register (IR): Instructions are loaded in the IR before their
execution, i.e., the instruction register holds the current instruction that is
being executed.
A two-step process can be used to define the simplest form of instruction
processing:
1. The CPU reads or fetches instructions or codes from the memory one
at a time.
2. It executes the operation specified by this instruction.
Instruction fetching is done using the program counter (PC), which keeps
track of the next instruction to be fetched. The next instruction in the sequence is
normally fetched, since programs are executed in sequence. The fetched instruction,
in the form of binary code, is loaded into an instruction register (IR) in the CPU.
The CPU then interprets the instruction and executes the required action. These
actions can be divided into the following categories:
Data transfer: from CPU to memory, memory to CPU, CPU to I/O or
I/O to CPU.
Data processing: an arithmetic or logic operation may be performed on the
data by the CPU.
Sequence control: this action is required to alter the sequence of
execution. For example, if an instruction from location 50 specifies that the
next instruction to be fetched should be from location 100, then
the program counter will need to be modified to contain the location 100
(which otherwise would have contained 51).
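These roles of the PC and IR can be illustrated with a toy simulator (the machine and opcodes here are hypothetical, chosen only to show the fetch step, the data-transfer, data-processing and sequence-control actions):

```python
def run(memory):
    """Run a tiny program: each cell holds an (opcode, argument) pair."""
    acc, pc = 0, 0
    while True:
        ir = memory[pc]          # fetch: read the instruction at PC into IR
        pc += 1                  # PC now points at the next instruction
        op, arg = ir
        if op == "LOAD":
            acc = arg            # data transfer into the CPU
        elif op == "ADD":
            acc += arg           # data processing
        elif op == "JUMP":
            pc = arg             # sequence control: overwrite the PC
        elif op == "HALT":
            return acc

program = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]
run(program)                     # -> 8
```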
The primary function of the processing unit in the computer is to interpret the
instructions given in a program and carry out the instructions. Processors are
designed to interpret a specified number of instruction codes. Each instruction
code is a string of binary digits. All processors have input/output, arithmetic, logic,
branch instructions and instructions to manipulate characters. The number and
type of instructions differ from one processor to another. The list of specific
instructions supported by the central processing unit (CPU) is termed as its
instruction set. An instruction in the computer should specify the following:
The task or operation to be carried out by the processor, termed as the
opcode.
The address(es) in memory of the operand(s) on which the data processing
is to be performed.
The address in the memory that may store the results of the data-processing
operation performed by the instruction.
The address in the memory for the next instruction, to be fetched and
executed. The next instruction which is executed is normally the next
instruction following the current instruction in the memory. Therefore, no
explicit reference to the next instruction is provided.
Instruction Representation
An instruction is divided into a number of fields and is represented as a sequence
of bits. Each of the fields constitutes an element of the instruction. A layout of an
instruction is termed as the instruction format (Figure 7.2).
[Figure 7.2: An instruction format with a 4-bit opcode field and a 12-bit address field.]
In most instruction sets, many instruction formats are used (Table 7.1). An
instruction is first read into an instruction register (IR); the CPU then decodes it,
extracting and processing the required operands on the basis of the references
made in the instruction fields. Since the binary representation of the
instruction is difficult to comprehend, it is seldom used for representation. Instead,
a symbolic representation is used.
Table 7.1 Examples of Typical Instructions

Instruction | Interpretation | Number of Addresses
ADD A,B,C | Operation A = B + C is executed | 3
ADD A,B | A = A + B; the original content of the operand location A is lost | 2
ADD A | AC = AC + A; here A is added to the accumulator | 1
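The three rows of Table 7.1 can be mimicked in a small sketch (memory modelled as a dictionary; purely illustrative, not an instruction set from the text):

```python
# Each snippet computes A = B + C in the style of the corresponding row.
mem = {"A": 9, "B": 2, "C": 3}

# Three-address: ADD A,B,C
mem["A"] = mem["B"] + mem["C"]          # A = 5

# Two-address: MOV A,B then ADD A,C -- A's original content is lost
mem["A"] = mem["B"]
mem["A"] = mem["A"] + mem["C"]          # A = 5 again

# One-address: LDA B; ADD C; STA A, working through the accumulator
ac = mem["B"]                           # LDA B
ac = ac + mem["C"]                      # ADD C
mem["A"] = ac                           # STA A
```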
eight-level hardware stack for PC storage during subroutine calls and input/output
interrupt services.
[Figure 7.3: Radar timing and control. An arbitrary waveform generator and
signal generator feed I and Q channels to an RF power amplifier, TX/RX duplexer
and antenna; the receive path runs through a receiver front end and IF stage to an
IF processor; a timing and control unit with a 10 MHz reference supplies SCLK
and PRF, with control and acquisition computers connected over a GPIB bus.]
Figure 7.3 shows how the clock pulses and control signals are collectively
generated by both the units for the required operation of the radar system, radar
sample clock and the pulse repetition frequency.
The memory control unit (Figure 7.4) works as an interface between the
processor and all the on-chip or off-chip memories. Timing is based on the system
clock which is either an on-board oscillator or an external clock. In either case,
the maximum clock frequency is 50 MHz (megahertz) when using 32-bit TSR
(terminate and stay resident) and 44 MHz when using 64-bit TSR:
[Figure: The op-code field of the instruction register, together with the negative
flag, feeds an instruction decoder (LDA, STA, ADD, SUB, MBA, JMP, JN),
which drives the control matrix along with a ring counter generating timing
states T0–T5.]
1. The basic computer has three instruction code formats. The formats are as
follows:
R-type
I-type
J-type
2. The timing and control unit generates the timing and control signals
necessary for the other parts of the CPU. It acts as the brain of the computer,
controlling the other peripherals and interfaces.
7.6 SUMMARY
7.8 SELF ASSESSMENT QUESTIONS AND
EXERCISES
7.9 FURTHER READINGS
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
UNIT 8 INSTRUCTION CYCLE
8.0 INTRODUCTION
In this unit, you will learn about the design of the basic computer, the instruction
cycle, memory reference instructions and I/O interrupts. The instruction cycle (also
known as the fetch–decode–execute cycle or the fetch–execute cycle) is the basic
operational process of a computer system. It is the process by which a computer
retrieves a program instruction from its memory, determines what actions the
instruction describes, and then carries out those actions. This cycle is repeated
continuously by the central processing unit (CPU), from boot-up until the computer
is shut down.
8.1 OBJECTIVES
You know that computers can store huge amounts of data and are designed to
cater to the end user’s need for speed, accuracy, diligence, versatility and storage
capacity. Their characteristics are as follows:
Speed: The internal processes of computers operate at the speed of light.
This speed is checked only by the programs controlling these processes
and the amount of data being processed. A computer can perform in a
minute what a human being may require a lifetime to perform. The speed of
computers is not referred to in terms of seconds or milliseconds; it is referred
to in terms of microseconds (10^-6 s), nanoseconds (10^-9 s) and picoseconds
(10^-12 s).
Accuracy: A computer is extremely accurate. Although there are chances
of errors, they occur mostly due to human error and not due to technological
drawbacks. Errors originate due to imprecise thinking by the programmer
or due to the input of erroneous data. They could also arise due to the poor
design of systems. Garbage in, garbage out (GIGO) is the term used to
refer to computer errors resulting from incorrect data input or from the lack
of reliability of programs.
Diligence: Unlike human beings, computers are capable of working for
long hours without breaks. A computer can perform a million calculations
with accuracy and speed. The speed or level of accuracy will be consistent
and will not deteriorate till the last calculation.
Versatility: Computers can perform any task, as long as it can be broken
down to a series of logical steps. For example, a task such as preparing a
payroll can be reduced to a few logical tasks or operations performed in a
logical sequence. This breaking down of a process into steps facilitates
computerized processing.
A computer does have its limitations also. It can perform only four basic
operations:
(i) It can exchange information with the outside world via input/output
(I/O) devices.
(ii) It can transfer data internally within the CPU.
(iii) It can perform basic arithmetic operations.
(iv) It can perform comparisons.
No intelligence: A computer does not possess any intelligence of its own.
It needs to be told what it has to do and in what sequence.
Information explosion: The speed with which computers can process
information in huge volumes has resulted in an information explosion, or the
generation of information on a large scale. Human beings have the ability to
sift through data or knowledge and choose to retain only the important
information and forget the irrelevant or unimportant stuff. There is clearly a
difference in the way computers store information and the way human beings
do. The secondary storage capacity of computers assists in storing and
recalling any amount of information. Therefore, it becomes possible to
retain information for as long as desired and recall it whenever needed.
8.2.1 Basic Anatomy of a Computer
The size, shape, cost and performance of computers have changed over the years.
However, the basic logical structure remains the same (Figure 8.1). A computer
system has three essential parts:
Input device
CPU (consisting of the main memory, the arithmetic logic unit and the control
unit).
Output device
In addition to these basic parts, computers also use secondary storage
devices (also called auxiliary storage or backing storage), used for storing data
and instructions on a long-term basis.
[Figure 8.1: The logical structure of a computer system, showing components
such as the motherboard, RAM, sound card and data bus.]
known as bits, an abbreviation for binary digits. You will now learn about some
commonly used terms:
Bit: A bit is the smallest element used by a computer. It holds one of the
two possible values (0 = off and 1 = on).
A bit which is OFF is also considered to be FALSE or NOT SET; a bit
which is ON is also considered to be TRUE or SET. Since a single bit can
only store two values, there could possibly be only four unique combinations
namely,
00 01 10 11
Bits are, therefore, combined into larger units to hold a greater
range of values.
Nibble: A nibble is a group of four bits. This gives a maximum of
16 possible different values:
2^4 = 16 (2 to the power of the number of bits)
Bytes: Bytes are a grouping of 8 bits (two nibbles) and are often used to
store characters. They can also be used to store numeric values:
2^8 = 256 (2 to the power of the number of bits)
Word: Just like we express information in words, so do computers. A
computer ‘word’ is a group of bits, the length of which varies from machine
to machine but is normally pre-determined for each machine. The word
may be as long as 64 bits or as short as 8 bits.
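These groupings can be checked directly (the 8-bit value below is an arbitrary illustration):

```python
word = 0b11011001        # one byte, i.e. two nibbles
high_nibble = word >> 4  # upper four bits: 0b1101
low_nibble = word & 0xF  # lower four bits: 0b1001

# The value counts follow from 2 raised to the number of bits:
assert 2 ** 4 == 16      # a nibble distinguishes 16 values
assert 2 ** 8 == 256     # a byte distinguishes 256 values
```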
Memory reference instructions (MRI) are 32 bits long, with an extra 16 bits that
come from the next successive memory location, which follows the instruction itself.
The effective memory address is computed by sign-extending the 16-bit
displacement to 32 bits and adding it to the given index register, as follows:
ea = r[x] + sxt(disp)
Here ‘ea’ is the effective address and r[x] is the selected index register. Indexing
with r[0] uses the program counter, giving addresses relative to the instruction;
this allows easy reference to locations in the current program text. All
memory reference instructions share the following assembly language formats:
op Rsrc, Rx, disp
op Rsrc, label
The first form specifies an index register Rx, which is one of R1 through R15,
and the second is used for direct label addressing. The assembler automatically
computes disp, which is the difference between the current location and the addressed label.
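The address computation ea = r[x] + sxt(disp) above can be sketched in a 32-bit model (the register contents here are made-up values, and the function names are illustrative):

```python
def sxt16(disp):
    """Sign-extend a 16-bit displacement to a signed value."""
    return disp - 0x10000 if disp & 0x8000 else disp

def effective_address(r, x, disp):
    """ea = r[x] + sxt(disp), masked to 32 bits."""
    return (r[x] + sxt16(disp)) & 0xFFFFFFFF

r = [0x1000] + [0] * 15          # r[0] modelling the PC-relative base
effective_address(r, 0, 0xFFFE)  # disp = -2  ->  0x0FFE
effective_address(r, 0, 0x0004)  # disp = +4  ->  0x1004
```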
Memory reference instructions are those instructions that require two machine
cycles: one cycle fetches the instruction and the other fetches the data and
executes the instruction. These instructions are the basis of arithmetic calculations.
Memory reference instructions are also used in multi-threaded parallel processor
architectures, where, during instruction fetch, two consecutive instructions
are tested to determine whether both are register-load instructions or register-save
instructions. If both instructions are register save or load instructions, the
corresponding addresses are then tested.
8.4.1 Memory Reference Format
Memory reference instructions are arranged as per the protocols of the memory
reference format of the input file: a simple ASCII sequence of integers in
the range 0 to 99, separated by spaces, without formatted text and symbols. These
are pure sequences of space-separated integer numbers, for example, |7 4|.
Figure 8.5 shows how 7 4 15 12 … are arranged in the memory
reference format. Here, dst and disp are keywords, where dst represents the
destination address and disp refers to the displacement.
_______________ _______________
|_|_|_|_|_|_|_|_| |_|_|_|_|_|_|_|_|
|7 4|3 0| |15 12|11 8|
|1 1 1 1| dst | |0 | x |
_______________________________
|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|
|15 0|
| disp |
Register-to-Register:  op | dR | sR
Memory Reference:      op | dR | sB | Address
Indexed:               op | dR | sX | sB | Address
The dR and sR fields give the destination register and source register for
an operation; each contains a value between 0 and 7. The sB field indicates the
base/address register and contains a value from 1 to 7. The sX field indicates the
arithmetic/index register and contains a value from 1 to 7. The first two bits of
the op code are 00, 01 or 10; long-format instructions start with 11. The op code
has two parts: the first part indicates the type of number and the second part
shows the operation performed according to the instruction. Table 8.1 (a) shows
the first part of the op code and Table 8.1 (b) shows the second part.
[Table 8.1 (a): First part of the op code, covering the long scratchpad,
register-to-scratchpad, scratchpad-to-scratchpad and auxiliary-register memory
reference formats, with their op, dR/dS, sR/sS/sX/sB and address fields.]
A device can be used for the identification of an input/output interrupt.
Instruction Cycle The following screenshot shows how input/output interrupt hardware settings
can be used for Input/Output Ports and Interrupt Request. The values estimated
by Windows are not correct.
The next screenshot shows that when the values estimated by Windows are
changed, then it works properly.
8.7 SUMMARY
The input unit performs the process of transferring data and instructions
from the external environment into the computer system.
Processing refers to the performing of arithmetic or logical operations on
data to convert them into useful information. Arithmetic operations include
add, subtract, multiply and divide; logical operations are comparisons such
as less than, equal to and greater than.
The control unit and arithmetic logic unit are together known as the central
processing unit.
Memory reference instructions are those instructions that require two machine
cycles: one cycle fetches the instruction and the other fetches the
data and executes the instruction.
Input/output interrupt is an external hardware event which causes the CPU
to interrupt the current instruction sequence. It follows an interrupt
mechanism to call the special interrupt service routine (ISR). Input/output
interrupt services save all the registers and flags. They also restore the
registers and flags and then resume the execution of the code they interrupted.
8.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES
Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbarough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.
BLOCK IV
CENTRAL PROCESSING UNIT
UNIT 9 INTRODUCTION TO CPU
Structure
9.0 Introduction
9.1 Objectives
9.2 Organization of CPU Control Registers
9.2.1 Organization of Registers in Different Computers
9.2.2 Issues Related to Register Sets
9.3 Stack Organization
9.4 Answers to Check Your Progress Questions
9.5 Summary
9.6 Key Words
9.7 Self Assessment Questions and Exercises
9.8 Further Readings
9.0 INTRODUCTION
In this unit you will learn about the organisation of CPU control registers and stack
organisation. A processor contains the different types of registers that are used for
holding the information related to the execution of instruction. For example program
counter holds the address of next instruction to be fetched. A stack is a storage
device that stores information in a last-in-first-out (LIFO) fashion. Only two type
of operations are possible in a stack, namely push and pop operations. Push
places data onto the top of stack, while pop removes the topmost element from
the stack. These operations can be used explicitly for execution of a program.
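The push and pop behaviour described above can be sketched in a few lines (an illustrative software model, not a hardware stack):

```python
class Stack:
    """LIFO storage: push places data on top, pop removes the topmost item."""

    def __init__(self):
        self._items = []

    def push(self, value):
        self._items.append(value)     # new item becomes the top of stack

    def pop(self):
        return self._items.pop()      # topmost (most recently pushed) item

s = Stack()
s.push(1); s.push(2); s.push(3)
s.pop()    # -> 3: the last item pushed comes off first
```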
9.1 OBJECTIVES
9.2 ORGANIZATION OF CPU CONTROL
REGISTERS
The main components of the Central Processing Unit (CPU) are as follows:
Control unit (CU): The basic role of CU is to decode/execute instructions.
It generates the control/timing signals that trigger the arithmetic operations
in ALU and also controls their execution.
Arithmetic and logic unit (ALU): It is used for executing mathematical
operations, such as *, /, + and –; logical operations, such as AND and
OR; and shift operations, such as rotation of data held in data registers.
Clock: There is a simple clock, a pulse generator, that helps to synchronize
the CU operations so that the instructions are executed in proper time. A
processor’s speed is measured in hertz, which is the speed of the computer’s
internal clock. The higher the hertz number, the faster is the processor.
Registers: A CPU consists of several operational registers used for storing
data that are required for the execution of instructions.
The design of CPU in its modern form was first proposed by John von
Neumann and his colleagues for the Institute for Advanced Studies (IAS) computer.
The IAS computer had a minimal number of registers along with the essential
circuits. This computer had a small set of instructions with each instruction having
two parts: opcode and operand. It was allowed to contain only one operand
address.
The simplest machine has one general-purpose register, called accumulator
(AC), which is used for storing the input or output operand for ALU. ALU directly
communicates with AC.
[Figure: A simple accumulator-based CPU. The MAR and PC connect to the
address bus; the IR, the ALU input registers X and Y, and the accumulator (AC)
connect to the data bus, with the ALU writing its result to AC.]
9.2.1 Organization of Registers in Different Computers
How the various components of the control registers are connected to one another,
and how they communicate data among themselves, is shown in Figure 9.2. From
a user’s point of view, the register set can be classified under the following two
basic categories: programmer-visible registers, and status and control registers. A
brief description of the two categories is given in the following lines.
[Figure 9.2: Register organization. A program-control unit (address register,
instruction register) and a data processing (execution) unit (data register,
general-purpose registers, status register, arithmetic-logic unit) communicate
with main memory and I/O devices over a system bus carrying address, data
and control lines.]
Programmer-visible registers
These registers can be used by machine or assembly language programmers to
hold temporary data and so minimize references to main memory. Virtually all
CPU designs provide a number of user-visible registers, unlike the single
accumulator proposed for the IAS computer.
Programmer-visible registers can be accessed using machine language. The following
are the various types of programmer-visible registers.
(i) General-purpose register: The general-purpose registers are used for
various functions as required by the programmer. A true general-purpose
register can contain operand for any opcode address or can be used for the
calculation of address operand for any operation code of an instruction.
But today’s trend favours machines having dedicated registers.
(i) Program counter (PC): PC is a register that holds the address of the next
instruction to be read from memory. The PC increments after each instruction
is executed, causing the computer to read the next instruction of the program,
which is stored sequentially in the main memory. In the case of a branch
instruction, the address part is transferred to the PC to become the address of
the next instruction. To read an instruction, the content of the PC is taken as the
address for memory and a memory read cycle is initiated; the PC is then
incremented by one, so it holds the address of the next instruction in
sequence. The number of bits in the PC is equal to the width of a memory
address.
(ii) Instruction register (IR): IR is used to hold the opcode of instruction that
is most recently fetched from memory.
(iii) Status or flag register: Almost all CPUs, as discussed earlier, have a
status register (also called flag register or processor status word), a part
of which may be programmer-visible. A register formed by
condition codes is called a condition code register. It stores information
obtained from the execution of previous instructions; depending
on the test result of a conditional branch instruction, the execution flow of
the program can be altered.
Some of the commonly used flags or condition codes stored in such a register
are:
Sign flag: Sign bit will be set according to the sign of previous arithmetic
operation, whether it is positive (0) or negative (1).
Zero flag: Flag bit will be set if the result of the last arithmetic operation
was zero.
Carry flag: Carry bit will be set if there is a carry result from the addition
of the highest order bits or a borrow is taken from subtraction of the
highest order bit.
Equal flag: This bit flag will be set if a logic comparison operation finds
out that both of its operands are equal.
Overflow flag: This flag is used to indicate the condition of arithmetic
overflow.
Interrupt enable/disable flag: This flag is used for enabling or disabling
interrupts.
Supervisor flag: This flag is used in certain computers to determine
whether the CPU is executing in supervisor mode or user mode. It is
important as certain privileged instructions can be executed only in
supervisor mode, and certain areas of memory can be accessed only in
supervisor mode.
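As an illustration of how several of these flags are derived, here is a sketch of an 8-bit add that sets them (the flag rules shown are the conventional ones, not taken from the text):

```python
def add8(a, b):
    """Add two 8-bit values and report the common status flags."""
    total = a + b
    result = total & 0xFF
    flags = {
        "carry": total > 0xFF,                       # carry out of bit 7
        "zero": result == 0,
        "sign": bool(result & 0x80),                 # MSB taken as the sign bit
        # overflow: the operands share a sign that the result does not
        "overflow": bool(~(a ^ b) & (a ^ result) & 0x80),
    }
    return result, flags

add8(0x7F, 0x01)   # -> (0x80, flags with sign and overflow set)
add8(0xFF, 0x01)   # -> (0x00, flags with carry and zero set)
```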
In most CPUs, on encountering a subroutine call or interrupt handling routine,
it is desired that the status information, such as condition codes and other register
information, be stored so that it can be restored once that subroutine is over. The
register that stores the condition code and other status information is known as the
program status word (PSW). Along with the PSW, a computer can have several
other status and control registers, such as an interrupt vector register in machines
using vectored interrupts, or a stack pointer if a stack is used to implement subroutine
calls. The design of the status and control registers also depends on the operating
system (OS) support. Hence, it is advisable to design the register organization with
the operating system in mind, as some control information is of specific use only to
the particular operating system being used. In some machines, the processor itself
coordinates the subroutine call, which results in the automatic saving of all
user-visible registers and restoring them on return. This allows each subroutine to
use the user-visible registers independently. In other machines, it is the responsibility
of the programmer to save the contents of the relevant user-visible registers prior to
a subroutine call; in this second case, the program must include instructions that
implement the saving of the data.
However, a clear separation of registers into these two categories does not
exist. For example, on some machines the program counter is user-visible
(e.g., the VAX), while it is not on other machines.
9.2.2 Issues Related to Register Sets
The operating system design is an important issue in designing the architecture of
the control and status register organization. As control information is specifically
used by the operating system, the CPU design is somewhat dependent on the
operating system. While designing the set of registers, there are a few more issues,
such as:
Should one use general-purpose registers or dedicated registers in a
machine?
In the case of specialized registers, the number of bits needed to specify a register is
reduced, as one has to specify only a few registers out of the full set. With the
use of specialized registers, the type of register a certain operand specifier refers to
can generally be implicit in the opcode; the operand specifier must only
identify one of a set of specialized registers rather than one out of all the registers,
thus saving bits. Similar data can be stored either in AC or DR, out of a possible 8
registers in a basic computer, as discussed earlier. However, this specialization
does not allow much flexibility to the programmer. Although there is no best solution
to this problem, the latest trends favour the use of specialized registers.
How many registers should be used?
Another issue related to register set design is the number of general-purpose
registers to be used. The number of registers affects the instruction set, as it
determines the type of addressing mode, and it determines the number of
bits needed in an instruction to specify a register reference. In general, it has been
found that the optimum number of registers in a CPU is in the range of 8 to 31.
The more registers used for storing temporary results, the fewer will be
the memory references; and as the number of memory references decreases, the
speed of execution of a program increases. It has been observed, however, that if
the number of registers goes above 31, there is no appreciable reduction in memory
references. Still, there are systems, like Reduced Instruction Set Computers
(RISC), where hundreds of registers are used; there, a very simple instruction set
architecture is used so that an overall high-performance system is obtained.
What should be the length of the register?
Another important characteristic related to register is its size. Normally, the length
of a register depends on the purpose for which it is designed. For example, a
register that holds addresses like AR should be long enough to hold the largest
address. Similarly, the length of data register like DR and AC should be long
enough to hold the data type it is supposed to hold. In certain cases, two consecutive
registers may be used to hold data whose length is double the register length.
How should the control information be stored? Should it be accessible to
the programmer?
Status information requires only a few bits of storage. The condition code register,
which may be partially visible to the programmer and holds the condition codes of
the various status flags, was discussed earlier. These flags are set by the CPU as
the result of operations. For example, an arithmetic operation may produce a positive,
negative, zero or overflow result; e.g., on dividing a number by 0, the overflow flag can be
set. These codes may be tested by a program for conditional branch operations.
The condition codes are collected in one or more registers. RISC machines have
several sets of conditional code bits. In these machines, an instruction specifies the
set of condition codes, which is to be used. Condition codes form a part of a
control register. Generally, machine instructions allow conditional code bits to be
read by implicit reference, but they cannot be altered by the programmer.
How should control information be allocated between registers and the
memory?
As it is not possible to store all control information in registers, memory has to be
used. It is common to dedicate the first few thousand words of memory for storing
control information. The designer must decide how much control information should
be in registers and how much in memory. There is always a trade-off between
cost and speed.
The first decision to be taken in ISA design is the type of internal storage in the
CPU. The three major choices are: a stack, an accumulator and a register set.
Early computers used the stack and accumulator architectures. Let us study how
a stack organization works.
A stack is a storage device that stores information in a last-in-first-out (LIFO)
fashion. Only two types of operations are possible on a stack, namely push and pop
operations. Push places data onto the top of stack, while pop removes the topmost
element from the stack. These operations can be used explicitly for execution of a
program and in some cases the operating system implements it implicitly, such as
in subroutine calls and interrupts, as discussed earlier. Some computers reserve a
separate memory for stack operations. However, most computers utilize main
memory for representing stacks. For accessing data stored in a stack, we need a
stack pointer (SP) register. The SP register is initially loaded with the address of
the top of the stack. Each instruction pops its source operands off the stack and
pushes its result on the stack. In memory, the stack is actually upside-down.
So, when something is pushed onto the stack, the stack pointer is
decremented:
SP ← SP – 1
M[SP] ← DR
When something is popped off the stack, the stack pointer is incremented:
DR ← M[SP]
SP ← SP + 1
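These register-transfer statements can be mirrored in a short, illustrative Python sketch (an assumption made for this explanation, not part of the text), with main memory as a list and a stack that grows downward from the top of its region:

```python
# Illustrative sketch: a memory-resident stack that grows downward,
# mirroring SP <- SP - 1; M[SP] <- DR on push and DR <- M[SP]; SP <- SP + 1 on pop.
MEM_SIZE = 16
memory = [0] * MEM_SIZE
sp = MEM_SIZE            # SP starts just past the top of the stack region

def push(dr):
    """SP <- SP - 1; M[SP] <- DR"""
    global sp
    if sp == 0:
        raise OverflowError("stack overflow")   # would lose information
    sp -= 1
    memory[sp] = dr

def pop():
    """DR <- M[SP]; SP <- SP + 1"""
    global sp
    if sp == MEM_SIZE:
        raise IndexError("stack underflow")
    dr = memory[sp]
    sp += 1
    return dr

push(10)
push(20)
assert pop() == 20 and pop() == 10   # last in, first out
```

The overflow and underflow checks correspond to the conditions the text warns about below.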
While using stack architecture (Figure 9.3), one must ensure that an overflow
or underflow does not happen while performing stack operations as these
conditions lead to loss of information. Let us study the following example to
understand how stack organization implements the addition of two variables.
Example: To store the sum of memory variables A and B into location C of memory:
Push A: Copy A from memory and push it on the stack.
Push B: Copy B from memory and push it on the stack.
Add : Pop the top two stack items and push their sum on the stack.
Pop C: Pop the top of stack and store it in memory location C.
A stack uses Reverse Polish Notation (RPN) to evaluate arithmetic expressions.
RPN is a way of representing arithmetic expressions that avoids the use of brackets
to define priorities for the evaluation of operators. In the RPN scheme, the numbers and
operators are listed one after another. The architecture of a stack can be thought
of as a pile of plates. The operations are performed by applying an operator to the
most recent numbers, that is, to those on the top of the stack. An operator takes the
appropriate number of arguments from the top of the stack and replaces them
with the result of the operation. In ordinary notation, one might write
(8 + 9) * (5 – 2)
The brackets tell us that we have to add 8 and 9, subtract 2 from 5, and then
multiply the two results together. In this notation, the above expression would be:
8 9 + 5 2 – *
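As an illustration (not part of the original text), a few lines of Python can evaluate such an RPN string with an explicit stack; the space-separated token format is an assumption:

```python
# A minimal RPN evaluator: scan tokens left to right, pushing numbers
# and applying each operator to the two topmost stack entries.
def eval_rpn(tokens):
    stack = []
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # topmost operand
            a = stack.pop()          # next operand below it
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (8 + 9) * (5 - 2)  is written  8 9 + 5 2 - *
print(eval_rpn("8 9 + 5 2 - *".split()))   # prints 51.0
```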
First, the given expression is converted into Reverse Polish Notation (RPN),
a notation introduced by the Polish philosopher and logician Jan
Łukasiewicz:
(A + B) * (C + D) = A B + C D + *
Then execute this program:
PUSH A
PUSH B
ADD
PUSH C
PUSH D
ADD
MUL
POP X
[Figure: successive stack contents during execution — after Push A: A; Push B: A, B;
Add: A+B; Push C: A+B, C; Push D: A+B, C, D; Add: A+B, C+D;
Multiply: (A+B)*(C+D); Pop X: the result is popped from the top of the stack into X.]
It has a simple architecture. It requires only one dedicated register (SP) and
9.5 SUMMARY
UNIT 10 INSTRUCTION FORMATS
10.0 INTRODUCTION
10.1 OBJECTIVES
10.2 INSTRUCTION FORMATS
Every instruction is represented by a sequence of bits and contains the information
required by the CPU for execution. Depending on the format of instruction, each
instruction is divided into fields, with each field corresponding to some particular
interpretation. A general instruction format is given in Figure 10.1.
[Fig. 10.1: | Opcode field | Address field |]
2. 2-address format
The 2-address format instruction has two addresses in the instruction, both of which
are input or source addresses; one of them also serves as the destination address
in which the output is stored. It can be represented as:
dst ← [dst] * [src]
Where src is the source operand, dst is the destination operand and * represents
the operation specified in opcode field. An example of the execution of instruction
using two address formats has been shown in Figure 10.3:
MOV R1, A    R1 ← [A]
ADD R1, B    R1 ← [R1] + [B]
MOV R2, C    R2 ← [C]
ADD R2, D    R2 ← [R2] + [D]
MUL R2, R1   R2 ← [R1] * [R2]
MOV X, R2    X ← [R2]
3. 1-address format
Only one address is used, serving both as source and as destination. Such
instructions usually use an implied accumulator (AC) (Figure 10.4).
LOAD A     AC ← [A]
ADD B      AC ← [AC] + [B]
STORE R    R ← [AC]
LOAD C     AC ← [C]
ADD D      AC ← [AC] + [D]
MUL R      AC ← [AC] * [R]
STORE X    X ← [AC]
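The one-address program above can be traced with a small accumulator-machine sketch in Python; the dictionary-based memory and the sample values are illustrative assumptions, while the instruction names follow the program above:

```python
# Sketch of a one-address (accumulator) machine running the program above
# to compute X = (A + B) * (C + D), with R used as a temporary.
memory = {'A': 2, 'B': 3, 'C': 4, 'D': 5, 'R': 0, 'X': 0}
ac = 0   # the implied accumulator

def execute(program):
    global ac
    for op, addr in program:
        if op == 'LOAD':       # AC <- [addr]
            ac = memory[addr]
        elif op == 'ADD':      # AC <- [AC] + [addr]
            ac = ac + memory[addr]
        elif op == 'MUL':      # AC <- [AC] * [addr]
            ac = ac * memory[addr]
        elif op == 'STORE':    # [addr] <- AC
            memory[addr] = ac

execute([('LOAD', 'A'), ('ADD', 'B'), ('STORE', 'R'),
         ('LOAD', 'C'), ('ADD', 'D'), ('MUL', 'R'), ('STORE', 'X')])
assert memory['X'] == (2 + 3) * (4 + 5)   # 45
```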
Let us consider the execution of the following statement, using the different
address formats.
R = (A + B) / (C – D × E):
(i) With one-address instructions (requiring an accumulator AC):
LOAD D     AC ← D
MUL E      AC ← AC × E
STOR Y     Y ← AC
LOAD C     AC ← C
SUB Y      AC ← AC – Y
STOR Y     Y ← AC
LOAD A     AC ← A
ADD B      AC ← AC + B
DIV Y      AC ← AC / Y
STOR R     R ← AC
(ii) With three-address instructions:
ADD Y, A, B    Y ← A + B
MUL T, D, E    T ← D × E
SUB T, C, T    T ← C – T
DIV R, Y, T    R ← Y / T
10.3 ADDRESSING MODES
The instruction set is an important aspect of any computer organization. Every
instruction has primarily two components: opcode and operands. You will learn
how to get the operands on which all the manipulations are to be performed. A simple
ADD operation must, along with the opcode, also provide information about how to
fetch the operands and where to put the result. Operands are commonly stored
either in main memory or in the CPU registers. If an operand is located in the main
memory, its location address has to be given in the instruction in the operand field.
Thus, if memory addresses are 32 bits, a simple ADD instruction would require three
32-bit addresses in addition to the opcode. Recent architectures provide a large
number of registers so that compilers can keep local variables in registers, eliminating
memory references. This results in a reduced program size and execution time.
As it is not possible to put all variables in registers, memory references are
required. These may address a large range of locations in main memory or, in some
systems, virtual memory. One possibility is for the instruction to contain the full memory
address of the operand, but this requires a large field. Also, the address must
be determined at compile time. Other possibilities
also exist, which provide both shorter specifications and the ability to determine
addresses dynamically. To achieve this objective, a variety of addressing techniques
have been employed. These techniques trade off between address range and/or
addressing flexibility, on the one hand, and the number of memory references and/
or complexity of address calculation, on the other. Basically, what the address field
yields is the effective address. The effective address (EA) of an operand is the
address of (or the pointer to) the main memory or register location in which the
operand is contained, i.e., operand = (EA). There are two ways by which the control
unit determines the addressing mode used by an instruction:
(i) The opcode itself explicitly specifies the addressing mode used in the
instruction.
(ii) The use of a separate mode field in the instruction indicates the addressing
mode used.
The various modes of addressing are discussed as follows:
1. Implied Mode
The operand is specified implicitly in the definition of the instruction, as in the case
of an accumulator machine, where only the accumulator holds the operand, or a
stack organization, where the operand is the data stored on the top of the stack. In
both cases, only one operand is available for manipulation. So, an instruction just
specifies the opcode, and no field is required for the operand, as shown in Figure 10.5.
[Figure: IR = | Op |]
Fig. 10.5 Implied Addressing Mode
2. Immediate Addressing Mode
Immediate addressing is the simplest form of addressing where the operand is
actually present in instruction, i.e., there is no operand fetching activity as the
operand is given explicitly in the instruction. This mode can be used to define and
use constants or set the initial values of variables. An example is given as follows:
MOV 15, R1 (Load binary equivalent of 15 in register R1)
ADD 15, R1 (Add binary equivalent of 15 to the contents of R1 and store the result in R1)
ADD 5 (Add binary equivalent of 5 to contents of accumulator)
Advantage: The advantage of immediate addressing is that no memory reference
other than fetching of the instruction is required. As no memory reference is needed
to obtain the operand, the instruction cycle is very short. It is commonly
used to define and use constants, or
set initial values.
Disadvantage: The disadvantage of immediate addressing (Figure 10.6) is that
the size of the operand is restricted to the size of the address field,
which, in most instruction sets, is small compared to the word length. Further, it
has limited utility.
[Figure: IR = | Op | Operand |]
Fig. 10.6 Immediate Addressing Mode
3. Absolute Mode
In this mode, the operand’s address is explicitly given in the instruction. This address
can be in either a register or in a memory location, i.e., the effective address (EA)
of the operand is given in the instruction. Figure 10.7 shows the absolute mode of
addressing.
[Fig. 10.7: the address field EA in the instruction points either to a register that
holds the operand or directly to the memory location that holds the operand.]
4. Indirect Addressing Mode
In the direct addressing mode, the length of the address field is usually less
than the word length. Thus, there is a limited address range. To overcome this
problem, one can use the address field that refers to the address of a word in
memory, which, in turn, contains a full-length address of the operand. The obvious
advantage of this approach is that for a word of length N, an address space of 2^N
is available. Its disadvantage is that the instruction execution requires two memory
references to fetch the operand: one to get its address and the other to get its
value.
Although the number of words that can be addressed in this mode is equal
to 2^N, the number of different effective addresses that may be referenced at any
one time is limited to 2^K, where K is the length of the address field. In a virtual
memory environment, all the effective address locations can be confined to page 0
of any process. Because the address field of an instruction is small, it will naturally
produce the low-numbered direct addresses, which would appear in page 0. When
a process is active, there will be repeated references to page 0, causing it to
remain in main memory. Thus, an indirect memory reference is unlikely to involve
more than one page fault.
A rarely used variant of indirect addressing is multilevel or cascaded indirect
addressing:
EA = (…..(A)…..)
In this case, one bit of a full-word address is an indirect flag (I). If the I bit
is 0, then the word contains the EA. If the I bit is 1, then another level of indirection is
invoked. There does not appear to be any particular advantage to this approach.
However, its disadvantage is that three or more memory references could be
required to fetch an operand. The multiple memory accesses needed to find the
operand make it slower.
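A sketch of this indirect-flag scheme in Python follows; the word width and field layout are assumptions made only for the illustration:

```python
# Illustrative sketch of (multilevel) indirect addressing: the top bit of a
# stored word is the indirect flag I; the remaining bits form an address.
WORD = 8                 # assumed: 1 flag bit + 7 address bits
I_BIT = 1 << (WORD - 1)

memory = [0] * 128
memory[5] = I_BIT | 9    # I = 1: location 5 points indirectly to location 9
memory[9] = 12           # I = 0: word 9 holds the effective address, 12
memory[12] = 99          # the operand itself

def effective_address(a):
    """Follow indirect flags until a word with I = 0 is found; EA = (...(A)...)."""
    word = memory[a]
    while word & I_BIT:              # I = 1: one more level of indirection
        word = memory[word & ~I_BIT]
    return word                      # I = 0: this word is the EA

ea = effective_address(5)
assert ea == 12 and memory[ea] == 99
```

Each loop iteration is one more memory access, which is why multilevel indirection is slow.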
Register indirect addressing
Just as register addressing is analogous to direct addressing, register indirect
addressing is analogous to indirect addressing. In both cases, the only difference is
whether the address field refers to a memory location or to a register. Thus, for a
register indirect address:
EA = (R)
The advantages and disadvantages of register indirect addressing are basically
the same as of indirect addressing. In both the cases, the address space limitation
(limited range of address) of the address field is overcome by referring that field to
a word-length location containing an address. In addition, register indirect addressing
uses one less memory reference than indirect addressing (see Figure 10.11).
[Fig. 10.11: the instruction's register field R selects a register whose contents
are the memory address of the operand.]
5. Displacement Addressing
Displacement addressing is a very powerful mode of addressing. It combines the
capabilities of direct addressing and register indirect addressing. It is known by a
variety of names depending on the context of its use. However, the basic mechanism
is the same.
EA = A + (R)
Displacement addressing (see Figure 10.12) requires that the instruction
have two address fields, at least one of which is explicit. The value contained
in one address field is used directly, as in direct addressing. The other address field,
or an implicit reference based on the opcode, refers to a register whose contents
are added to A to produce the effective address.
[Fig. 10.12: the effective address is formed by adding the address field A to the
contents of the register selected by field R; the sum addresses the operand in memory.]
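The effective-address rules for several of these modes can be collected into an illustrative Python sketch; the memory contents and register numbers below are assumptions, not part of the text:

```python
# Illustrative effective-address calculation for a few addressing modes.
memory = {100: 555, 200: 100}    # assumed contents: M[200] holds an address
registers = {1: 100, 2: 60}      # assumed register file contents

def effective_address(mode, a=None, r=None):
    if mode == 'direct':             # EA = A
        return a
    if mode == 'indirect':           # EA = (A)
        return memory[a]
    if mode == 'register_indirect':  # EA = (R)
        return registers[r]
    if mode == 'displacement':       # EA = A + (R)
        return a + registers[r]
    raise ValueError(mode)

assert effective_address('direct', a=100) == 100
assert effective_address('indirect', a=200) == 100          # one extra memory access
assert effective_address('register_indirect', r=1) == 100   # no extra memory access
assert effective_address('displacement', a=40, r=2) == 100
```

Note how register indirect reaches the same EA as indirect without the extra memory reference, which is exactly the advantage claimed above.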
6. Stack Addressing
The final addressing mode that we consider is stack addressing. A stack is a linear
array of locations, sometimes referred to as a push-down list or last-in-first-out
queue. It is a reserved block of locations. Items are appended to the top of the
stack so that, at any given time, the block is partially filled. Associated with the
stack is a pointer whose value is the address of the top of the stack. The stack
pointer is maintained in a register. Thus, references to stack locations in memory
are in fact register indirect addresses.
The stack mode of addressing is a form of implied addressing. The machine
instructions need not include a memory reference; they implicitly operate on
the top of the stack.
Another important issue is how to determine the addressing mode to be
followed. Virtually all computer architectures provide more than one addressing
mode. The question arises as to how the control unit can determine the address
mode to be used in a particular instruction. Several approaches are taken. Often,
different opcodes will use different addressing modes (Table 10.2). Also, one or
more bits in the instruction format can be used as a mode field. The value of the
mode field determines which addressing mode is to be used.
Table 10.2 Various Addressing Modes
Notation:
A = Contents of an address field in the instruction
R = Contents of an address field in the instruction that refers to a register
EA = Effective (actual) address of the location containing the referenced
operand
(X) = Contents of location X
Check Your Progress
1. What is the characteristic feature of the 3-address format?
2. Define immediate addressing.
3. State the major advantages and disadvantages of the direct addressing
mode.
4. What do you understand by a stack?
10.4 DESIGN OF INSTRUCTION SET
Designing an instruction set for a system is a complex art. A variety of designs are
possible, each having its own trade-offs and advantages. Major concerns in designing
an instruction set are as follows:
10.4.1 Length of Instructions
The length of an instruction depends on the memory size, bus architecture,
CPU complexity, etc. It should be the same as the number of bytes transferred
from memory in one cycle; otherwise, more fetch cycles would be required to
fetch a single instruction, creating a bottleneck at memory. Also, the instruction
length should be a multiple of the character length, i.e., 8 bits.
Some programmers want a complex instruction set containing more instructions, more
addressing modes and greater address range, as in case of CISCs. Other
programmers, on the other hand, want a small and fixed-size instruction set that
contains only a limited number of opcodes, as in case of RISCs. The instruction
set can have variable-length instruction format primarily due to: (i) varying number
of operands, and (ii) varying lengths of opcodes in some CPUs.
10.4.2 Allocation of Bits
For a given instruction length, the question arises as to how many bits should be
allocated for storing the opcode and how many bits for storing the operand
or its address. This allocation depends on various factors, such as:
(i) Number of addressing modes: What type of addressing mode is
employed?
(ii) Number of operands used in an instruction: Today’s computers generally
provide a two-operand format.
(iii) Register or memory: Whether an operand is stored in register or memory.
(iv) Number of register sets used: The number of registers used has a great
impact on the design of an instruction set architecture and overall performance
of a computer. When we increase the number of registers, we will notice
the following:
The number of memory references reduces, as more frequently
used data can now reside in registers.
There is an increase in the size of an instruction word.
There are greater demands on the compiler to schedule registers.
(v) Address range: The range of memory addresses that can be referenced is
related to the number of address bits.
(vi) Number of operations: If a large number of operations are designed,
then more bits of the instruction must be allocated for storing
opcodes. For example, a basic computer using 4 bits can encode
16 types of operations, and if a system requires 64 possible
operations, we have to allocate 6 bits to store an opcode.
(vii) Address granularity: An address can refer to a word or a byte, depending
on the designer's choice.
An instruction is a group of bits that instructs the computer to perform a
specific operation. Each instruction comprises several microoperations. The
instruction is usually divided into parts, each part having a different interpretation.
As said earlier, an instruction provides the operation code and information about
the operand on which this operation is executed. The operand on which an operation
is executed can be operand itself or it can be the address where operand is stored
(depending on the instruction format discussed earlier).
The most basic part of an instruction code is its operation part which specifies
what operation has to be performed. The operation code is a group of bits that
define such operations as add, subtract, AND, OR, move, shift, complement,
jump, etc. The number of bits required for the operation code must be large enough
to identify all operations. Thus, it depends on the total number of operations
available in the computer. In a system having M distinct operations such that M =
2^n (or less), the operation code must consist of at least n bits. For example, if the
basic computer has 16 different operations, 4 bits are required for representing
the opcode.
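The relation M = 2^n (or less) translates directly into a bit-count calculation; the following Python sketch is illustrative:

```python
# n bits of opcode can distinguish at most 2**n operations,
# so n = ceil(log2(M)) bits suffice for M operations.
from math import ceil, log2

def opcode_bits(m_operations):
    return ceil(log2(m_operations))

assert opcode_bits(16) == 4    # the basic computer's 16 operations
assert opcode_bits(64) == 6    # 64 operations need a 6-bit opcode
assert opcode_bits(17) == 5    # even one operation beyond 16 forces 5 bits
```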
An instruction code must specify the address of the operand. It can be the
address of main memory if the operands are stored in main memory, or it can be
the address of the register in case of register-addressing modes. There are various
ways of arranging the binary code of instructions. Each computer has its own
instruction code format. However, an instruction set should satisfy the following
general rules:
(i) Completeness: One should be able to evaluate any computable function
with a machine-language program.
[Figure: | Opcode | Operands |]
Fig. 10.13 Instruction Format
In this case, as the opcode is a 4-bit field, it is possible to have only 16 instructions.
The instruction set is a link between hardware and software; it reflects the
programmer's view of the system state, the primitive operands and the basic operations
to be performed on operands. Different types of computers have different instruction
sets. There are several options while selecting an instruction set, such as:
(i) Choosing a minimal yet complete set.
(ii) Choosing instructions based on their speed. (Thus, an instruction set should
comprise short instructions and fewer memory-access instructions. Such
instruction sets are used in the RISC architecture.)
(iii) Choosing an elaborate instruction set encapsulating frequent instructions,
as in case of the CISC architecture.
An instruction can be considered as a function, which is defined to be
computable if it can be calculated in a finite number of steps by a Turing machine.
A simple CPU differs widely from a Turing machine. All processors have a finite
and small amount of memory. Therefore, you should choose an appropriate
instruction set to minimize logic circuit complexity. However, this choice can lead
to excessively complex programs. So, there is a fundamental compromise between
CPU simplicity and programming complexity.
Let us consider a simple high-level language statement: X = X + Y
If we assume a simple set of machine instructions, this operation could be
accomplished with three instructions: (assume X is stored in memory location 622,
and Y in memory location 625.)
Take the following steps:
Load a register with the contents of memory location 622.
Add the contents of memory location 625 to the register.
Store the contents of the register in memory location 622.
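The three steps can be traced in Python as a simple register-transfer sketch; the addresses 622 and 625 come from the text, while the initial values of X and Y are assumptions:

```python
# Register-transfer trace of X = X + Y, with X at address 622 and Y at 625.
memory = {622: 7, 625: 5}       # assumed initial values: X = 7, Y = 5

reg = memory[622]               # 1. load a register with M[622]
reg = reg + memory[625]         # 2. add M[625] to the register
memory[622] = reg               # 3. store the register back into M[622]

assert memory[622] == 12        # X now holds X + Y
```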
10.4.3 Types of Instructions
In general, all instructions fall into the following categories:
Data transfer instructions: The data transfer operations are concerned
with transfer of data between the various components of computer, such as
data transfer between two registers or between a register and the main
memory or from a register to any circuit (such as ALU) in the processor.
This transfer is usually done by the common bus architecture.
Data manipulation instructions: Such an instruction performs all data
manipulation operations, such as add, subtract or logical operation. These
operations are executed by ALU of the processor.
Program control instructions: Such an instruction basically controls the
flow of instructions within a program that depends on the decision parameter.
Miscellaneous instructions: These instructions are not used as frequently
as data movement instructions.
Here, you will study each of these in detail.
1. Data Transfer Instructions
As we know, all processes under execution reside in the main memory in
the form of binary information and all the computations of instructions stored
in these programs are done in processor registers. Therefore, the user must
be capable of moving information between these two units. Data transfer
instructions are used to transfer or copy data from one location to another
in registers or in external main memory or in input–output devices without
changing its binary content, i.e., information stored in it. These allow the
processor to move data between registers and between memory and
registers (e.g., 8086 microprocessor has mov, push, pop instructions). A
‘move’ instruction and its variants are among the most frequently used
instructions in an instruction set. This data transfer can be categorized in the
following types:
Processor register–memory: Data may be transferred from processor
to memory and vice versa.
Processor register–I/O: Data may be transferred to or from a peripheral
The bit manipulation operations primarily involve three actions:
[Figure: starting from the 8-bit pattern 00010111, a logical shift right gives 00001011
and a logical shift left gives 00101110; a 0 enters at the vacated end in each case.]
Fig. 10.14 An 8-bit Logical Shift Register
[Figure: rotating 00010111 right through a carry of 1 gives 10001011 with a carry
out of 1; rotating 00010111 left through a carry of 1 gives 00101111 with a carry
out of 0; a circular right rotate of 00010111 without carry gives 10001011, the least
significant bit wrapping around to the most significant position.]
Fig. 10.15 Circular Shift Rotate without Carry and through Carry
[Figure: an arithmetic shift right of 00010111 gives 00001011; the sign bit is preserved.]
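The shift and rotate behaviours shown in the figures can be sketched with illustrative 8-bit helper functions in Python (the function names are assumptions made for the example):

```python
# Illustrative 8-bit shift and rotate helpers, using the figures'
# example pattern 00010111; the mask keeps results within 8 bits.
MASK = 0xFF

def lsl(x):   # logical shift left: a 0 enters at the LSB
    return (x << 1) & MASK

def lsr(x):   # logical shift right: a 0 enters at the MSB
    return x >> 1

def asr(x):   # arithmetic shift right: the sign bit is replicated
    sign = x & 0x80
    return (x >> 1) | sign

def rol_carry(x, carry):   # left rotate through a 1-bit carry flag
    new_carry = (x >> 7) & 1          # MSB moves out into the carry
    return ((x << 1) | carry) & MASK, new_carry

v = 0b00010111
assert lsl(v) == 0b00101110
assert lsr(v) == 0b00001011
assert asr(0b10010110) == 0b11001011          # a negative value keeps its sign bit
assert rol_carry(v, 1) == (0b00101111, 0)     # matches the figure's left rotate
```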
Table 10.7 Common Program Control Operations

S.No.  Name                      Mnemonic  Action taken
1      Branch                    BR        Branches to a particular location
2      Jump                      JMP       Jumps to a particular location
3      Skip                      SKP       Skips the next instruction
4      Call                      CALL      Calls a subroutine
5      Return                    RET       Returns from subroutine to main program
6      Compare (by subtraction)  CMP       Compares values by subtraction
7      Test (by ANDing)          TST       Tests conditions by ANDing
10.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The 3-address format instruction has three addresses in the instruction; two
are input or source addresses and the third is the destination address in
which output has to be stored.
2. Immediate addressing is the simplest form of addressing where the operand
is actually present in instruction, i.e., there is no operand fetching activity as
the operand is given explicitly in the instruction. This mode can be used to
define and use constants or set initial values.
3. The major advantage of the direct addressing mode is that for a word of
length N, an address space of 2N is available. Its main disadvantage is that
the instruction execution requires two memory references to fetch the
operand: one to get its address and the other to get its value.
4. A stack is a linear array of locations. It is sometimes referred to as a push-
down list of last-in-first-out queue. It is a reserved block of locations. Items
are appended to the top of the stack so that, at any given time, the block is
partially filled.
5. The data manipulation instruction performs all data manipulation operations,
such as add, subtract or logical operation. These operations are executed
by the arithmetic logic unit of the processor. The program control instruction,
on the other hand, controls the flow of instructions within a program that
basically depends on the decision parameter.
6. Data transfer can be categorized in the following types:
Processor register-memory: Data may be transferred from processor
to memory and vice versa.
Processor register-I/O: Data may be transferred to or from a peripheral
device by transferring the content of processor register to an I/O
module and vice versa.
Processor register: Data transfer internally among the registers of the
processor.
7. In a logical shift operation, 0 is transferred in as the serial input. It can be right
or left, depending on whether the 0 enters through the most significant
bit or the least significant bit, respectively. The arithmetic shift assumes
that the data being shifted is an integer. Hence, in the result, the
sign bit is not shifted, maintaining the arithmetic sign of the shifted result. In
an arithmetic shift, the bits that are shifted out of either end are discarded.
10.6 SUMMARY
3. What do you understand by arithmetic instructions? Explain the features of
some typical arithmetic instructions.
4. Explain program control instructions.
10.9 FURTHER READINGS
Mano, M. Morris. 1992. Computer System Architecture. New Delhi: Prentice-
Hall.
Mano, M. Morris. 2000. Digital Logic and Computer Design. New Delhi:
Prentice-Hall.
Mano, M. Morris. 2002. Digital Design. New Delhi: Prentice-Hall.
Stallings, William. 2007. Computer Organisation and Architecture. New Delhi:
Prentice-Hall.
UNIT 11 INPUT-OUTPUT ORGANIZATION
Structure
11.0 Introduction
11.1 Objectives
11.2 Peripheral Devices
11.2.1 Storage Devices: Hard Disk
11.2.2 Human-interactive I/O Devices
11.3 Input/Output (I/O) Interface
11.3.1 Problems in I/O Device Management
11.3.2 Aims of I/O Module
11.3.3 Functions of I/O Interface
11.3.4 Steps in I/O Communication with Peripheral Devices
11.3.5 Commands Received by an Interface
11.4 Asynchronous Data Transfer
11.4.1 Strobe Control
11.4.2 Handshaking
11.4.3 Asynchronous Serial and Parallel Transfers
11.5 Modes of Data Transfer
11.6 Answers to Check Your Progress Questions
11.7 Summary
11.8 Key Words
11.9 Self Assessment Questions and Exercises
11.10 Further Readings
11.0 INTRODUCTION
In this unit you will learn about the peripheral devices and I/O interface. There are
a variety of input/output (I/O) devices available in the market. These devices are
also known as peripheral equipment as they are attached to a computer externally,
i.e. they are not a part of the motherboard. The various I/O hardware devices
available for different purposes are storage devices (disk), transmission devices
(network cards and modems) and human-interface devices (screen, keyboard
and mouse). Some devices may be used for more than one activity, e.g. a disk can
be used both for input and output. Input devices are used for receiving the data
from a user and transferring it to the central processing unit (CPU). Output devices
receive data from the CPU and present it to the end user.
An I/O interface is an entity that controls the data transfer from external
device, main memory and/ or CPU registers. You can say that it is an interface
between the computer and I/O devices (external devices) and is responsible for
managing the use of all devices that are peripheral to a computer system. You will
also learn the asynchronous data transfer.
11.1 OBJECTIVES
11.2 PERIPHERAL DEVICES
The peripheral devices can be thought of as transducers, which can sense physical
effects and convert them into machine-tractable data. For example, a computer
keyboard, which is one of the most common input devices, accepts input by the
pressing of keys, or by physically moving cursor using mouse. Such physical actions
produce a signal that the processor translates into a byte stream or bit signal so
that it can understand it. Similarly, if we consider an output device like a computer
monitor screen, it accepts a bit stream generated by a processor which is further
translated into the signal that controls the movement of the electronic beam that
strikes the screen. The pixel combination produces a picture on the monitor screen.
Some devices mediate both input and output, e.g. memory or a disk drive.
The various types of I/O devices are discussed hereafter.
11.2.1 Storage Devices: Hard Disk
A hard disk is one of the important I/O devices and is most commonly used as a
permanent storage device in any computer system. Due to improvements in technology
and in the density of magnetic disks, it has become possible to have disks with larger
capacities at cheaper rates.
Diskette (soft disk, floppy disk)
It is a 3.5-inch diskette with a capacity of 1.44 MB. The architecture is similar to
that of a hard disk, i.e. it is divided into concentric tracks, which are further divided
into sectors.
Magnetic tape
A magnetic tape consists of a plastic ribbon with a magnetic surface. The data is
stored on the magnetic surface as a series of magnetic spots.
Optical disk
A variety of optical disks are available in the market, e.g. CD-ROM and DVD,
with storage capacities ranging from about 650 MB to several GB. These disks read
the data by reflecting pulses of laser beams off the surface. A disk is usually written
once with a high-power laser that burns spots in a dye layer, turning them dark so
that they appear as pits on the surface. Such pits are read by a laser beam that
reflects into a phototransistor. Due to variations in the thickness of the disk,
vibrations, etc., a focusing lens is used to image the pits onto the phototransistor.
USB flash drives (commonly called pen drives)
These are typically small, lightweight, removable and rewritable. They are one of
the most popular modes used for data transfer because they are more compact
and generally faster, able to hold more data and more reliable (due to their lack of
moving parts and their more durable design) compared to the floppy disks. These
are NAND-type flash memory data storage devices integrated with a universal
serial bus (USB) interface.
Magneto-optical disk
A magneto-optical disk is based on the same principle as the optical disk. Both
have capacities in the range of 128 MB to 1.3 GB. The only difference is
that it uses a layer of magnetic grains that are reoriented by the magnetic write
head so that they either block or allow light to reflect off the backing layer. As in a
floppy disk, the read-write medium is stored in a self-sealing rigid case. The time
required to access the data is 16 to 30 ms, with a transfer rate of 2 to 3 MB/s.
11.2.2 Human-Interactive I/O Devices
The human-interactive devices can be further categorized as direct and indirect.
The direct devices are those that interact with people. These devices respond to
human action and display information in real-time at a rate that complements the
capabilities of people. The main job of these devices involves the translation of
data between human-readable to machine-readable forms and vice versa. The
direct I/O devices include the keyboard, mouse, trackball, screen, joystick, drawing
tablet, musical instrument interface, speaker and microphone.
Indirect devices do not interact with users. These devices are used where
human beings are not directly involved in accepting the input or producing the
output such as a scanner or a printer. These devices also perform the data translation
in the format acceptable to machine. But they do not respond directly to a human
in real-time.
The human-interactive devices can further be classified into input and output types:
1. Input Devices
Input devices collect the information from the end user or from a device and convert
this information or data into a form, which can be understood by the computer. An
input device is characterized as good if it can provide useful data to the main
memory or the processor directly and in a timely manner. Some common input
devices which allow the user to communicate with the computer are as follows:
(i) Keyboard
A keyboard is one of the most common input devices attached to all
computers. This input device may be found as part of an on-line/interactive
computer system used for entering characters. The layout of a keyboard is
similar to that of the traditional QWERTY typewriter, as it is designed
basically for editing data. The keyboards of a computer contain some
extra command keys and function keys. They contain a total of 101 to 104
keys. One can input data by pressing the correct combination of keys.
(ii) Pointing Devices
There are many pointing devices, such as light pen, joystick, mouse, etc.
(a) Mouse
Of all the pointing devices, the mouse is the most popular device used
with the keyboard for accepting input. Its popularity is primarily due to
the fact that it provides very fast cursor movement, giving the user
the freedom to work in any direction.
(b) Joystick
A joystick is specially used in systems that are designed for gaming
purposes. It is based on the principle of electricity, i.e. it is a resistive
device. It consists of a stick that turns the two shaft potentiometers,
one for the X direction and the other for the Y direction. The movement of
the stick is just like turning the volume knob on a radio. Different positions of the
potentiometer result in different voltage outputs. Using an analog-to-
digital converter (ADC), the output from the potentiometer’s resistance
at that particular position is converted into a corresponding number.
Thus, in the case of the joystick also, the distance covered gives a particular
output. This output of the ADC is then serialized and sent to the
computer for further processing, in a similar manner as for a keyboard.
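The quantization step performed by the ADC can be sketched in a few lines. This is only an illustration: the function name, reference voltage and bit width are assumptions for the example, not values fixed by any particular joystick interface.

```python
def adc_sample(voltage, v_ref=5.0, bits=8):
    """Quantize an analog voltage in the range 0..v_ref to an unsigned
    n-bit code, as an ADC attached to a potentiometer would."""
    levels = 2 ** bits                    # number of quantization levels
    code = int(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the valid code range

# A mid-travel stick position (about 2.5 V) maps near the middle
# of the 8-bit range; the extremes map to 0 and 255.
print(adc_sample(0.0))   # 0
print(adc_sample(2.5))   # 127
print(adc_sample(5.0))   # 255
```

Each sampled code is what would then be serialized and sent to the computer, as described above.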
(iii) Voice input systems
A system that enables a computer to recognize the human voice is called
the voice-input system. The two commonly used voice input systems are
microphone and voice recognition software.
(a) Microphone
The microphone turns acoustical pressure into a variation in voltage.
The digital value of this voltage is obtained by sampling the analog
signal at regular intervals (the sampling rate); the average integer value
of each sample is taken as the output. This digitized signal can be
used for recording, as in audio CD or can be converted into text by
processing it by voice recognition software.
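The sampling described above can be sketched as follows. The sample rate, bit depth and the pure-tone input are illustrative choices for the example, not values mandated by the text.

```python
import math

def digitize(signal, sample_rate, duration, amplitude=32767):
    """Sample a continuous signal (a function of time in seconds) at a
    fixed rate and round each sample to an integer, as in 16-bit PCM."""
    n = int(sample_rate * duration)
    return [round(amplitude * signal(i / sample_rate)) for i in range(n)]

# A 1 kHz sine tone sampled at 8 kHz for one period yields 8 integer samples.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = digitize(tone, sample_rate=8000, duration=0.001)
print(len(samples))  # 8
```

The resulting list of integers is the digitized signal that can be recorded or passed on to voice recognition software.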
(b) Voice recognition software
It is complex software. To extract phonemes and whole words from
a voice message, you need software that combines both
signal processing and artificial intelligence techniques. Thus, a very
powerful machine and a dedicated signal processing computer are
required to implement it. Even then, it may be limited to the single
speaker for which it has been trained; if there are multiple speakers, it
must be limited to just a small number of words and phrases.
Self-Instructional
220 Material
(iv) Source data automation (scanner)
mechanical process, such printers have a very slow speed. The most
common printer based on this technology is the dot-matrix printer, which
can typically print 120 to 200 characters per second.
(b) Non-impact printing
The non-impact printing technology prints characters and other images
on paper or any surface by using electrostatic, chemical, heat, laser,
photographic or ink-jet principles. Ink-jet printers and laser-jet
printers are prominent examples of non-impact printing.
Ink-jet printers
These printers spray tiny droplets of coloured inks on the paper.
The pattern of printing depends on how the nozzle sprays the ink,
which dries within a few seconds.
Laser-jet printers
The working of laser-jet printers is similar to that of photocopiers.
Nowadays, there is a tendency to design devices that are hybrids
of photocopiers, scanners and printers. In laser-jet printers, there
is a rotating drum over which the paper passes. Such printers use
a low-power laser that charges the paper on the drum with a
small electrical charge at each point where a black dot is required.
This paper is then passed over a toner tray. The toner tray contains
toner, a fine black powder, which is attracted to the paper
wherever it is charged.
(iv) Plotters
Plotters are used for printing the big charts, drawings, maps and three-
dimensional illustrations, specially used for architectural and designing
purposes.
11.3 I/O INTERFACE

An I/O interface is an entity that controls the data transfer from an external
device, main memory and/or CPU registers. You can say that it is an interface between
the computer and I/O devices (external devices) and is responsible for managing
the use of all devices that are peripheral to a computer system. It attempts to make
an efficient use of all available devices while retaining the integrity of data. Various
features of I/O interface are discussed as follows:
11.3.1 Problems in I/O Device Management
Some of the major problems with the I/O device management are as follows:
There are various peripherals working on different principles. For example,
a few work on the electromechanical principle, a few on the electromagnetic
principle and a few on the optical principle, and so on. As each of them uses a
different method of operation, it is impractical for the processor to
understand and interpret them all. Thus, designing an instruction set that can
convert the signals into corresponding input values for all devices is not possible.
As a new I/O device is designed on some new technology, it is required to
make the device compatible with the processor. Designing an instruction
set for every new device is not at all feasible.
The rate of data transfer of peripherals is usually much slower than that of the
processor and memory. Therefore, it is not logical to let them communicate
directly over the high-speed system bus used by the processor. A synchronization
mechanism is required for data transfer to be handled smoothly.
Peripheral devices accept input in a variety of formats. Thus, they may use
data formats and word lengths different from those used in the processor and main
memory.
The operating mode of I/O devices is different for different devices. It must
be controlled so that it may not disturb the operation of other devices
connected to the processor.
To resolve these problems, there is a special hardware component between
CPU and peripheral to supervise and synchronize all input and output transfers.
Figure 11.1 illustrates the relationship between the CPU, the peripheral interface
chip and the peripheral device. Although the peripheral interface chip may appear
just like a memory location to the CPU, it contains specialized logic that allows it
to communicate with the external devices. There are a number of such I/O
controllers in a computer system, each controlling one or more peripheral devices.
Fig. 11.1 Relationship between CPU, Peripheral Interface Chip and Peripheral Device
11.3.2 Aims of I/O Module
Fig. 11.2 Connections between I/O Devices and Processor through I/O Bus
There are three types of buses, namely data bus, address bus and control
bus. Each device has an interface through which it is connected to a bus
(Figure 11.2). The interface decodes the signal received from the input device
in the format that the processor can understand, and also interprets the
control signal received from the processor for peripheral devices. It
supervises and synchronizes the data flow between external device and
processor. Many devices also have a controller, which may or may not be
physically integrated on the interface chip. The controller is often used for
buffering the data, e.g. IDE is used as a disk controller.
11.3.3 Functions of I/O Interface
The main functions of the interface are:
Control and timing signals
Coordination in the flow of traffic between internal and external devices is
done by control and timing signals.
Processor communication
As a bus is usually employed for data transfer, each interaction between the
CPU and the I/O module involves bus arbitration. As the processor needs
to communicate with the external device, I/O module must perform the
following actions:
o Command decoding
I/O module accepts commands, sent as signals on the control bus,
from the processor.
o Data
The data is exchanged between the processor and the
I/O module over the data bus.
o Status reporting
Different devices have different speeds. A few are very slow compared
to the processor. Hence, it is required for the I/O module to know the status
before the processor sends the data. Along with various error signals
used to verify the data sent, the common status signals used are BUSY
and READY.
o Address recognition
I/O module must recognize a unique address for each peripheral it
controls.
Device communication
The I/O module has to communicate with the device to fetch status information,
the data transfer rate, etc.
Data buffering
Data comes from main memory in rapid bursts and must be buffered by the
I/O module and then sent to the device at the latter’s rate.
Error detection
The I/O module not only detects errors but also reports them to the CPU.
Figure 11.3 shows the block diagram of an I/O interface.
Fig. 11.3 Block Diagram of an I/O Interface
11.4 ASYNCHRONOUS DATA TRANSFER

All the operations in a digital system are synchronized by a clock that is generated
by a pulse generator. The CPU and I/O interface can be designed independently
or they can share common bus. If CPU and I/O interface share a common bus,
the transfer of data between two units is said to be synchronous. There are some
disadvantages of synchronous data transfer, such as:
It is not flexible as all bus devices run on the same clock rate.
Execution times are multiples of clock cycles (if an operation needs
10.1 clock cycles, it will take 11 cycles).
Bus frequency has to be adapted to slower devices. Thus, one cannot take
full advantage of the faster ones.
It is particularly not suitable for an I/O system in which the devices are
comparatively much slower than processor.
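The rounding-up effect in the second point can be checked with a one-line calculation. This is only a sketch of the timing rule; real buses also add wait states and arbitration delays.

```python
import math

def cycles_needed(work_cycles):
    """On a synchronous bus every operation occupies a whole number of
    clock cycles, so a fractional requirement rounds up to the next cycle."""
    return math.ceil(work_cycles)

print(cycles_needed(10.1))  # 11
print(cycles_needed(4.0))   # 4
```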
In order to overcome all these problems, asynchronous data transfer is
used for the input/output system.
The word ‘asynchronous’ means ‘not in step with the elapse of time’. In
case of asynchronous data transfer, the CPU and I/O interface are independent of
each other. Each uses its own internal clock to control its registers. There are
many techniques used for such data transfer.
11.4.1 Strobe Control
In strobe control, a control signal called the strobe pulse, which is supplied from
one unit to the other, indicates that a data transfer has to take place. Thus, for each
data transfer, a strobe is activated either by source or destination unit (see Figure
11.4). A strobe is a single control line that informs the destination unit that valid
data is available on the bus. The data bus carries the binary information from the
source unit to the destination unit.
Data transfer from source to destination
The steps involved in data transfer from source to destination are as follows:
(i) The source unit places data on the data bus.
(ii) The source activates the strobe after a brief delay in order to ensure that data
values are steadily placed on the data bus.
(iii) The information on the data bus and the strobe signal remain active for some
time, long enough for the destination unit to receive the data.
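The source-initiated strobe steps can be mimicked with a toy simulation. All class and attribute names here are illustrative; real strobe timing is handled by hardware, not software.

```python
class Bus:
    """A toy model of a strobe-controlled transfer: the source places data
    on the bus and raises the strobe; the destination latches the data
    only while the strobe line is active."""
    def __init__(self):
        self.data = None
        self.strobe = 0

class Destination:
    def __init__(self):
        self.latch = None
    def watch(self, bus):
        if bus.strobe:          # strobe active: valid data is on the bus
            self.latch = bus.data

bus, dest = Bus(), Destination()

bus.data = 0b01010101   # (i) source places data on the data bus
dest.watch(bus)         # strobe not yet active, so nothing is latched
bus.strobe = 1          # (ii) source activates the strobe after a delay
dest.watch(bus)         # (iii) destination latches while strobe is active
bus.strobe = 0          # source disables the strobe; transfer is complete

print(dest.latch == 0b01010101)  # True
```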
Fig. 11.5 Destination-Initiated Strobe for Data Transfer
Fig. 11.7 Destination-Initiated Data Transfer Using the Handshaking Technique
The keyboard (Figure 11.8) has a serial asynchronous transfer mode. In this
technique, the interactive terminal inserts special bits at both ends of the character
code. Thus, each character transmission has three types of bits: a start bit, the
character bits and the stop bits. The transmitter rests at the 1 state when
no transmission is taking place. The first bit sent, a 0, indicates that the
character transmission has begun. The last bit is always 1 (Figure 11.9).
Fig. 11.9 Format of Asynchronous Serial Data Transfer
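The frame format of Figure 11.9 can be generated programmatically. In this sketch the two stop bits and the LSB-first bit order are common serial conventions assumed for illustration, not details fixed by the text.

```python
def frame_char(code, char_bits=8, stop_bits=2):
    """Build an asynchronous serial frame: a 0 start bit, the character
    bits (sent LSB first), then stop bits at 1. The line idles at 1
    between frames."""
    data = [(code >> i) & 1 for i in range(char_bits)]
    return [0] + data + [1] * stop_bits

frame = frame_char(ord('A'))  # 'A' = 0b01000001
print(frame)  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
```

The receiver performs the reverse: it waits for the 0 start bit, samples the character bits at the agreed rate, then checks for the stop bits.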
Let us summarize the steps taken to write a block of memory to an output port
such that one byte is transferred at a time.
(i) Firstly, we have to initialize memory as well as the output port addresses.
(ii) The following steps are repeated until all bytes are transferred:
(a) Read one byte from memory.
(b) Write that byte to output port.
(c) Increment memory address so that next byte can be transferred during
the next clock pulse.
(d) Verify if all bytes are transferred:
If yes, go to the end of step (ii).
Else, wait until the output port is ready for transferring the next byte and go
to step (ii)(a).
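These steps translate directly into a polling loop. In the sketch below the OutputPort class and its ready flag are hypothetical stand-ins for a real status register; in this toy model the port is always ready, so the wait loop never spins.

```python
class OutputPort:
    """A toy output port: real hardware would expose the READY flag
    through a status register, as described earlier in this unit."""
    def __init__(self):
        self.ready = True          # always ready in this sketch
        self.written = []

    def write(self, byte):
        self.written.append(byte)

def transfer_block(memory, start, count, port):
    """Send `count` bytes starting at `start` to the port, byte by byte."""
    addr = start                   # (i) initialize the memory address
    for _ in range(count):         # (ii) repeat until all bytes are sent
        while not port.ready:      # wait until the port is ready
            pass
        port.write(memory[addr])   # (a)-(b) read one byte, write it out
        addr += 1                  # (c) increment the memory address

mem = b'HELLO'
port = OutputPort()
transfer_block(mem, 0, len(mem), port)
print(bytes(port.written))  # b'HELLO'
```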
Using this approach, we transfer the data at a speed much less than
the maximum rate at which it can be read from the memory. Practically, there
are various transfer modes through which the data transfer between computer and
I/O device takes place with a much faster rate. These modes are as follows:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)
4. Dedicated processor, such as input–output processor (IOP)
Fig. 11.10 Flow Chart of Programmed I/O, Interrupt-driven I/O and DMA
Modes of Data Transfer
1. Programmed I/O
Programmed I/O operations are the result of I/O instructions written in
the computer program. Each data transfer is controlled by an instruction set stored
in the program. When the processor has to perform any input or output instruction,
it issues a command for the appropriate I/O module that executes the given
instruction, as shown in Figure 11.10(a). The processor has to continuously monitor
the status of the I/O device to see whether it is ready for data transfer. Once it is
ready, the I/O module performs the requested action and then alerts the processor
for further action by setting the appropriate bits in the I/O status register.
2. Interrupt-initiated I/O
In programmed I/O, the processor has to check continuously till the device becomes
ready for transferring the data. Interrupt-initiated I/O instead uses the interrupt
facility and issues a command that requests the interface to issue an interrupt when
the device is ready for data transfer. Here the interrupt is generated only when the device is ready, and hence, till the
device becomes ready, the processor can execute another program instead of
checking the device as it has to do in programmed I/O. Once processor receives
an interrupt signal [Figure 11.10(b)], it stops the current processing task and starts
I/O processing. After the completion of the I/O task, it returns to the original task.
3. Direct Memory Access (DMA)
In direct memory access, the interface transfers the data directly to the memory unit
via the memory bus. The processor just initiates the data transfer by sending the
starting address and the number of bits to be transferred, and proceeds with the
previous task. When the request is granted by the memory controller, the DMA
transfers the data directly into memory [Figure 11.10(c)]. It is the fastest mode of
data transfer.
4. Input–output processor (IOP)
IOP is a special dedicated processor that combines interface unit and DMA as
one unit. It can handle many peripherals through DMA and interrupt facility.
5. Data Communication Processor (DCP)
DCP is also a special-purpose dedicated processor that is designed specially for
data transfer in network.
1. Direct I/O devices are those devices that interact with people. They include
the keyboard, mouse, trackball, screen, joystick, drawing tablet, musical
instrument interface, speaker and microphone. Indirect I/O devices, on the
other hand, do not interact with users and are used where humans are not
directly involved in accepting the input or producing the output, such as a
scanner or a printer.
2. A system that enables a computer to recognize the human voice is called
the voice-input system.
3. The common optical scanner devices are magnetic ink character recognition
(MICR), optical mark reader (OMR) and Optical Character Reader
(OCR).
4. An I/O interface is an entity that controls the data transfer from an external
device, main memory and/or CPU registers. We can say that it is an interface
between a computer and I/O devices (external devices) and is responsible
for managing the use of all devices that are peripheral to a computer system.
5. There are four types of commands an I/O interface may receive:
11.7 SUMMARY
11.8 KEY WORDS
Peripheral devices: They are transducers that can sense physical effects
and convert them into machine-tractable data.
Strobe: It is a single control line that informs the destination unit that valid
data is available on the bus.
Priority Interrupt
12.0 INTRODUCTION
In this unit, you will learn in detail about the various modes of data transfer and priority interrupts.
You have already studied that data transfer is the process of using computing
techniques and technologies to transmit or transfer electronic or analog data from
one computer node to another. There are various modes of data transfer such as
interrupt-initiated I/O, direct memory access (DMA), etc. The interrupt-driven
I/O data transfer technique is based on the on-demand processing concept. In this,
each I/O device generates an interrupt only when an I/O event has to take place.
In DMA, the data is moved between a peripheral device and the main memory
without any direct intervention of the processor. Although DMA requires a relatively
large amount of hardware and is complex to implement, it is the fastest possible
means of transferring the data between peripheral device and memory.
12.1 OBJECTIVES
12.2 PRIORITY INTERRUPT
In the interrupt-driven I/O techniques, the processor starts data transfer when it
detects an interrupt signal which is issued when the device is ready. This helps the
processor to run a program concurrently with the I/O operations.
The interrupt-driven I/O data transfer technique is based on the on-demand
processing concept. In this, each I/O device generates an interrupt only when an
I/O event has to take place. This is like the action that has to be taken if the user
presses a key on the keyboard. The transfer is done by the service routine that
processes the required data. The interrupt handler transfers the control to this
routine. After the I/O interrupt is serviced, the processor returns control to the
interrupted program, which has been waiting to resume execution.
Its main advantages are as follows:
The processor does not have to wait for long for I/O modules.
The processor does not have to repeatedly check the I/O module status.
Types of exceptions
Interrupts are just one type of exception. As far as software is concerned,
there are the following three types of exceptions:
(i) Interrupts: These are raised by hardware at anytime (asynchronous).
(ii) Traps: These are raised as a result of the execution of the program, such as
division by zero. As the traps are reproduced at the same spot if the program
parameters are the same as before, they are considered to be synchronous.
(iii) System calls: Also called software interrupts, these are raised by the
operating system to provide services for performing certain common I/O
tasks, such as printing a character, opening a file, etc.
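The synchronous nature of traps can be demonstrated in any language. In the purely illustrative Python sketch below, division by zero raises an exception at the same spot on every run with the same inputs, which is exactly why traps are considered synchronous.

```python
def run(dividend, divisor):
    """Division by zero acts like a trap: it is raised by the execution of
    the program itself, so the same inputs reproduce it at the same spot."""
    try:
        return dividend // divisor
    except ZeroDivisionError:
        return "trap: divide by zero"

print(run(10, 2))  # 5
print(run(10, 0))  # trap: divide by zero
print(run(10, 0))  # trap: divide by zero (reproduced deterministically)
```

An interrupt, by contrast, could arrive between any two statements, at a point that differs from run to run.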
Figure 12.1 illustrates the organization of a system with a simple interrupt-
driven I/O mechanism. In most microprocessors, during an I/O operation an interrupt
request (IRQ) is asserted by a peripheral device requesting attention. This request
may or may not be granted.
Fig. 12.1 A Simple Interrupt-Driven I/O System
Let us study the sequence of software and hardware events that occur when an
interrupt triggers and how the system handles them. The sequence is as follows:
(i) If a program requires any input or output, it lets the device controller or
the device issue an interrupt.
(ii) User programs interact with the I/O devices through the Operating System
(OS). OS has a special region of memory reserved for it that is inaccessible
by user programs, called the kernel space. The processor is placed in the
kernel mode. It finishes the instruction currently under execution before
responding to the interrupt.
(iii) The processor determines what type of interrupt it is and sends a signal of
acknowledgement to the device that issued it.
(iv) Once the interrupt signal is acknowledged, the device is allowed to remove
its interrupt signal.
(v) The processor saves the information required to continue the currently
executing program once the interrupt is over. Thus, it saves on the stack the
status of the program stored in the Program Status Word (PSW), the location of
the next instruction as stored in the program counter, and the contents of all
registers.
(vi) Now the program counter is loaded with the address of the subroutine that
contains the code for the actual handling of that particular interrupt. It
does this through an interrupt vector table, which points to the interrupt-
handling routines (which store the address of the interrupt handling
subroutine). Depending on the operating system, there can be a one-to-one
mapping or many interrupt-handling routines for a given interrupt. In such a
case, the processor decides which interrupt handler is to be invoked.
(vii) Then, interrupts are disabled to avoid an interrupt being interrupted.
(viii) The interrupt-handler now processes the interrupt by checking the status
information relating to the I/O operation or other event that caused the interrupt.
(ix) After the interrupt processing is done, the saved register values are retrieved
from stack and restored in registers.
(x) Finally, interrupted PSW and PC of program are popped from stack and
the next instruction of the previously interrupted program is executed.
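The save, dispatch and restore sequence of steps (v) to (x) can be sketched as follows. The vector table contents, handler names and register values are all invented for illustration; a real vector table holds machine addresses, not Python functions.

```python
stack = []                     # models the system stack

def keyboard_handler():
    return "key processed"

def disk_handler():
    return "block read"

# Hypothetical interrupt vector table: interrupt number -> handler routine.
vector_table = {1: keyboard_handler, 2: disk_handler}

def handle_interrupt(irq, pc, psw, registers):
    stack.append((pc, psw, tuple(registers)))    # (v) save PC, PSW, registers
    handler = vector_table[irq]                  # (vi) look up the handler
    result = handler()                           # (viii) service the interrupt
    saved_pc, saved_psw, saved_regs = stack.pop()  # (ix)-(x) restore context
    return result, saved_pc                      # resume at the saved PC

result, resume_at = handle_interrupt(irq=1, pc=0x400, psw=0b0010,
                                     registers=[7, 42])
print(result, hex(resume_at))  # key processed 0x400
```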
Figure 12.2 illustrates the interrupt handling in an I/O system. The PSW consists
of the condition code register and a reference to the code that is used by the
operating system and the interrupt-processing mechanism.
Fig. 12.2 Interrupt Handling in an I/O System
Thus, a major part of the CPU overhead is the time the CPU spends in the read
operation, as shown in Figure 12.3. The DMA module is allowed to use the system
bus when the processor does not need it, or to temporarily force the processor to
suspend operations. This suspension of the processor is called cycle stealing.
Fig. 12.3 CPU Bus Signals for DMA Transfer
To initiate a DMA transfer, the host writes a DMA command block. The
block contains a pointer to the source and destination of the transfer and the
number of bytes to be transferred (Figure 12.4). The address of this command
block is written to the DMA controller by the CPU. Once the CPU requests, the
'request' bit will be set for that specific block. After the DMA controller detects a
request, it starts the data transfers, which gives the CPU an opportunity to perform
other tasks. Once the DMA reads all the data, only one interrupt is generated per
block and the CPU is notified that the data is available in the buffer.
On comparing DMA with programmed I/O, we find that the overhead is
negligible. As the CPU is no longer responsible for setting up the device, checking
whether the device is ready after the read operation and processing the read
operation itself, we have almost zero overhead. By using DMA, the bottleneck of
the read operation is no longer the CPU; the bottleneck is transferred to the PCI
bus. The decrease in overhead results in a much higher throughput, approximately
3–5 times higher than with programmed I/O.
There are three possible ways of organizing the DMA module: using a detached
DMA module, an integrated DMA module or a separate I/O bus. These are as follows:
(i) Single bus: Detached DMA module
Each transfer uses bus twice, one from I/O to DMA and the other from
DMA to memory.
Processor is suspended twice.
(ii) Single bus: Integrated DMA module
Module may support more than one device.
Each transfer uses bus only once, from DMA to memory.
Processor is suspended once.
(iii) Separate I/O bus
Bus supports all DMA enabled devices.
Each transfer uses the bus only once, from DMA to memory.
12.3.1 DMA Controller
DMA requires additional hardware, called the Direct Memory Access Controller
(DMAC). It mimics the processor by taking control of the buses from the CPU
and allowing the transfer of information without involving the CPU. The idea is
simply that instead of interrupting the CPU with every byte of data transferred, a
separate processor called the DMA controller is used. This interrupts the CPU only
when the transfer of the block is complete. This is indeed more efficient than interrupting
CPU for every byte transferred. For example, for disk read operation, the controller
is provided with the address of the block of data on the disk, the destination
address in memory and the size of the data. The DMA controller is then
commanded to proceed. It is an interface chip, just like a specialized
microprocessor, which controls the data transfer between the memory and the
peripheral device. DMAC knows how this transaction should take place. Hence,
no memory fetching is done during transfer as all instructions are available for data
transfer.
The various functions of the DMAC are as follows:
To provide addresses for the source or destination of data in memory
To inform the peripheral that data is needed or is ready
To grab the computer's internal data and address buses during data transfer
Hence, before the DMA starts the data transfer, the CPU first sets up the DMAC's
registers to specify the following:
Whether it is read operation or write operation
I/O device address using data lines
Starting memory address using data lines (stored in address register)
Number of words to be transferred, using data lines (stored in the word count register)
The direction of data transfer; whether it is from device to processor or
vice versa
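The behaviour of the address and word-count registers listed above can be sketched in software. The class and register names below are illustrative only; as the text notes, a real DMA controller is a far more complex device.

```python
class DMAC:
    """A toy DMA controller: the CPU loads the address and word-count
    registers once; the controller then moves the whole block and raises
    a single interrupt when the count reaches zero."""
    def __init__(self, memory):
        self.memory = memory
        self.address = 0
        self.count = 0

    def setup(self, address, count):
        self.address = address      # starting memory address
        self.count = count          # number of words to transfer

    def run(self, device_words):
        for word in device_words:
            if self.count == 0:
                break
            self.memory[self.address] = word
            self.address += 1       # increment the address register
            self.count -= 1         # decrement the word-count register
        return "interrupt: block done" if self.count == 0 else "in progress"

mem = [0] * 8
dma = DMAC(mem)
dma.setup(address=2, count=3)
print(dma.run([0xAA, 0xBB, 0xCC]))  # interrupt: block done
print(mem)  # [0, 0, 170, 187, 204, 0, 0, 0]
```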
Thus, once the DMAC has control of the bus, it generates all timing signals that
are required for transferring the data between peripheral and memory. A real DMA
controller is a very complex device. Its configuration and interaction with processor
is shown in Figure 12.5. The various signals exchanged between the processor
and the DMA are also shown. It has several internal registers: at least one to hold
the address of the next memory location to access, one to hold the number of
words to be transferred (shown as the word count), a control register and data
bus buffers. For each word transferred, the DMA increments its address register
and decrements its word count register. The DMA continues to service requests
until the word count becomes zero.
[Figure 12.5: the DMA controller and its address bus, data bus and control connections to the processor]
The CPU communicates with the DMA through the address and data buses, as
with any interface unit. The DMA has its own bus architecture; how these buses
are connected to the CPU, the I/O device and the memory unit is shown in Figure 12.6.
Here, when BG = 0, the CPU communicates with the internal registers of the DMAC
through the RD and WR input lines; when BG = 1, RD and WR become output lines
that let the DMAC read from or write to the RAM, specifying the read or write operation.
[Figure 12.6: bus switches connecting the CPU, the DMA and memory — switch 1 enables the CPU, and switches 2 and 3 enable the DMA transfer once the DMA grant is given]
The DMA module can transfer the entire block of data at a time, directly to
or from memory, without going through the CPU. The CPU then continues with
other work. It delegates this I/O operation to the DMA module, and that module
will take care of it. When the transfer is complete, the DMA modules send an
interrupt signal to the CPU. Thus, the CPU is involved only at the beginning and
end of the transfer.
The DMA module needs to take control of the bus in order to transfer data
to and from memory. For this purpose, the DMA module must use the bus only
when the CPU does not need it, or it must force the CPU to temporarily suspend
operation. The latter technique is more common and is referred to as cycle stealing,
since the DMA module effectively steals a bus cycle.
Figure 12.7 shows where in the instruction cycle the CPU may be suspended.
In each case, the CPU is suspended just before it needs to use the bus. This is not
an interrupt; the CPU does not save a context and does something else. Rather,
the CPU pauses for one bus cycle. The overall effect is to cause the CPU to
execute more slowly. Nevertheless, for a multiple-word I/O transfer, DMA is far
more efficient than interrupt- driven or programmed I/O.
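The register behaviour described above can be sketched in a few lines of code. The following Python fragment is an illustrative simulation only — the function name `dma_transfer` and the variable names are inventions for the example, not the interface of any real DMA chip:

```python
# Illustrative sketch: a DMA block transfer with cycle stealing.
# The address register and word count follow the description in the text;
# memory and the device are modelled as plain Python lists.

def dma_transfer(memory, device_data, start_address):
    """Move a block from a device into memory, one word per stolen bus cycle."""
    address_reg = start_address          # next memory location to access
    word_count = len(device_data)        # words still to be transferred
    stolen_cycles = 0

    for word in device_data:
        # The DMA steals one bus cycle: the CPU pauses, but saves no context.
        memory[address_reg] = word
        address_reg += 1                 # increment the address register
        word_count -= 1                  # decrement the word count register
        stolen_cycles += 1

    assert word_count == 0               # transfer complete ...
    return stolen_cycles                 # ... only now is the CPU interrupted

memory = [0] * 16
cycles = dma_transfer(memory, [10, 20, 30, 40], start_address=4)
print(cycles)          # 4 -- one stolen cycle per word
print(memory[4:8])     # [10, 20, 30, 40]
```

Note that the CPU is involved only before the call (setting up the registers) and after it returns (the interrupt), which is exactly the efficiency argument made above.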
The sequence of events that take place, in the form of a series of transactions
between the peripheral, the DMAC and the CPU, is as follows:
The CPU sets up the DMAC’s registers and delegates the I/O operation to the DMA module.
The processor then continues with other work.
The DMA module transfers the entire block of data, one word at a time, directly to or from memory, stealing a bus cycle for each word.
The DMA module sends an interrupt signal to the CPU when the transfer is complete.
Till now you have studied the various modes of data transfer which involve the
CPU. As I/O is slow and wastes much of the processor’s time, you can deploy
one or more external processors and assign them the task of communicating
directly with the I/O devices without any intervention of the CPU. An
input/output processor (IOP) may be classified as a processor with the direct
memory access capability that communicates with I/O devices. As shown in Figure
12.9, such a system has one memory unit and a number of processors, which
include the CPU and one or more IOPs. The IOP’s responsibility is to handle all input/
output related operations and relieve the CPU for other operations. The processor
that communicates with remote terminals like telephone or any other serial
communication media in serial fashion is called data communication processor
(DCP).
[Figure 12.9: block diagram of a computer with an IOP — the CPU and the IOP share a memory unit over the memory bus, and the IOP connects to the peripheral devices (PD)]
Figure 12.9 shows the block diagram of a computer having an IOP. An IOP is just
like a CPU: it can fetch and execute its own instructions. It is designed to handle all
details of I/O processing. The IOP can also perform other processing tasks, such as
arithmetic, logic, branching and code translation. It provides the path for data
transfer between the various peripheral devices and the memory unit. The CPU
assigns it the task of initiating the I/O operation after testing the status of the IOP.
If the status is fine, the CPU continues with its other work and the IOP handles the
I/O operation. After the input is completed, the IOP transfers its content to memory
by stealing one memory cycle from the CPU. Similarly, an output is transferred
directly from memory to the IOP, stealing a memory cycle, and from the IOP to the
output device at a rate the device accepts (Figure 12.10).
As shown in Figure 12.10, the CPU issues an instruction to the IOP, and the IOP
interrupts the CPU when the transfer is done. Device to/from memory transfers
are controlled by the IOP directly, and the IOP also steals memory cycles.
Fig. 12.10 Data Transfer between IOP and CPU
Instructions that the IOP reads from memory are called commands, to distinguish
them from the instruction words executed by the CPU. The CPU informs the IOP
where the commands are in memory and when they are to be executed (Figure 12.11).
[Figure 12.11: the CPU instruction to the IOP contains an OP code and a device address; the IOP looks in memory for the commands for the target device]
The command words constitute the program for the IOP. A command word informs
the IOP what to do, where to store data in memory, how much data is to be
transferred and any other special request (Figure 12.12).
The command word contains an OP field (what to do), an Addr field (where to put
the data), a Cnt field (how much data) and an Other field (special requests).
Fig. 12.12 IOP Instruction
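As a rough illustration of how such a command word might be packed and unpacked, consider the following sketch. The field widths (8-bit OP, 16-bit Addr, 16-bit Cnt, 8-bit Other) are assumptions made for the example; the text does not specify them:

```python
# Illustrative sketch of an IOP command word with OP, Addr, Cnt and Other
# fields. The field widths below are assumed, not taken from the text.

OP_BITS, ADDR_BITS, CNT_BITS, OTHER_BITS = 8, 16, 16, 8

def pack_command(op, addr, cnt, other):
    """Build one command word: what to do, where to put data, how much, extras."""
    word = op
    word = (word << ADDR_BITS) | addr
    word = (word << CNT_BITS) | cnt
    word = (word << OTHER_BITS) | other
    return word

def unpack_command(word):
    """Split a command word back into its (op, addr, cnt, other) fields."""
    other = word & ((1 << OTHER_BITS) - 1); word >>= OTHER_BITS
    cnt = word & ((1 << CNT_BITS) - 1);     word >>= CNT_BITS
    addr = word & ((1 << ADDR_BITS) - 1);   word >>= ADDR_BITS
    return word, addr, cnt, other           # op is what remains

cmd = pack_command(op=0x02, addr=0x1A00, cnt=512, other=0)
print(unpack_command(cmd))   # (2, 6656, 512, 0)
```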
In most computers, the CPU acts as a master and the IOP as a slave. The I/O
operations are started by the CPU but executed by the IOP. The status words
indicate the conditions of the IOP and the I/O devices, such as an overload condition,
a device busy or a device ready status, etc. Once the CPU finds that the status is
OK, it sends the instruction to the IOP to start the I/O transfer. The memory address
received with the instruction tells the IOP where to find its program. The CPU then
continues with another program, while the IOP is busy with the I/O program. Both
programs refer to memory by means of DMA transfer. The IOP interacts with the
CPU by means of interrupts; on ending its program, the IOP sends an interrupt to
the CPU. The CPU responds to the interrupt by checking the IOP status to find
whether the complete transfer operation took place with or without error.
Figure 12.13 illustrates the communication between the CPU and the IOP: the
CPU sends an instruction to test the IOP path; the IOP transfers a status word to a
memory location; if the status is O.K., the CPU sends a start I/O instruction to the
IOP and continues with its other program, while the IOP accesses memory for the
IOP program.
[Figure: a root hub in the computer connects, through further hubs, peripheral devices such as the monitor, printer, keyboard, scanner, mouse and joystick]
One common example of a DCP is the modem, which is used for establishing a
connection between the computer and the telephone line. As telephone lines are
designed for analog signal transfer, a modem converts the audio signal of the
telephone line to digital format for computer use, and also converts the digital
signal to an audio signal for transmission through the communication line.
The transmission can be synchronous or asynchronous, depending upon the
transmission mode of the remote terminal. Synchronous transmission does not
use start and stop bits; it is commonly used with high-speed devices to realize the
full efficiency of the communication link. The synchronous message is sent as a
continuous stream in order to maintain synchronism. In modems, internal clocks
are set to the frequency of the communication line, and the receiver clock has to
be adjusted continuously to track any frequency shift. In asynchronous transmission,
on the other hand, each character is sent separately with its own start and stop
bits. In synchronous transmission the message is sent as a block of data, and the
entire block is transmitted with special control characters at the beginning and end
of the block, as shown in Figure 12.15: SYNC is used for synchronization, PID is
the process ID, followed by the message (packet), the CRC code and EOP
indicating the end of the block. One function of the data communication processor
is to check for transmission errors. The CRC (cyclic redundancy check) is a
polynomial code algorithm used to detect errors that occur during transmission.
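The idea behind the CRC can be sketched as follows: the sender appends to the message the remainder of a modulo-2 polynomial division, and the receiver repeats the division — a non-zero remainder signals a transmission error. The generator polynomial x^4 + x + 1 used below is an arbitrary choice for illustration:

```python
# Illustrative sketch of a cyclic redundancy check. The 4-bit generator
# polynomial x^4 + x + 1 (0b10011) is an assumption made for the example.

GEN = 0b10011            # generator polynomial, degree 4
GEN_DEGREE = 4

def crc_remainder(bits):
    """Modulo-2 polynomial division of a bit string by GEN; returns remainder."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | int(b)
        if reg & (1 << GEN_DEGREE):   # high bit set: subtract (XOR) the generator
            reg ^= GEN
    return reg

msg = "11010011101100"
# Sender: divide the message followed by four zero bits, append the remainder.
crc = crc_remainder(msg + "0" * GEN_DEGREE)
transmitted = msg + format(crc, "04b")

# Receiver: divide the whole received block; zero remainder means no error found.
print(crc_remainder(transmitted))        # 0
flipped = "0" + transmitted[1:]          # corrupt a single bit in transit
print(crc_remainder(flipped) != 0)       # True -- the error is detected
```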
12.6 ANSWER TO CHECK YOUR PROGRESS QUESTIONS
1. Polling is the technique that identifies the highest priority resource by means
of software.
2. The parallel priority interrupt method uses a register whose bits are set
separately by an interrupt signal for each device. Priority is assigned
according to the bit value in the interrupt register. A mask register is used
whose purpose is to control the status of each interrupt request. It disables
a lower priority interrupt while a higher priority device is being serviced.
3. Direct memory access (DMA) is an important data transfer technique. In
DMA, the data is moved between a peripheral device and the main memory
without any direct intervention of the processor. The DMA technique is
particularly useful for transferring large amount of data (e.g. images, disk
transfer, etc.) to memory.
4. The data communication processor is an IOP that distributes and collects
data from the remote terminals through telephone or other connection lines.
It is a specialized I/O processor designed to communicate directly with a
data communication network.
12.7 SUMMARY
Direct memory access (DMA): It is a data transfer technique in which
the data is moved between a peripheral device and the main memory without
any direct intervention of the processor.
Input/output processor (IOP): It is a processor with the direct memory
access capability that communicates with I/O device.
BLOCK V
MEMORY ORGANIZATION
UNIT 13 MEMORY
Structure
13.0 Introduction
13.1 Objectives
13.2 Memory Hierarchy
13.3 Main Memory
13.3.1 RAM
13.3.2 ROM
13.4 Auxiliary Memory
13.5 Associative Memory
13.6 Answer to Check Your Progress Questions
13.7 Summary
13.8 Key Words
13.9 Self Assessment Questions and Exercises
13.10 Further Readings
13.0 INTRODUCTION
In this unit, you will learn about the various types of memory and their hierarchy.
The computer memory is an essential part of a computer system. Memory can be
divided into two types, primary memory and secondary memory. The main memory
communicates directly with the CPU. The secondary memory communicates with
the main memory through the I/O processor. The main memory is of two types—
RAM and ROM. You will also learn about the purpose of the different auxiliary
memories used in a computer system and the concept of associative memory.
13.1 OBJECTIVES
13.2 MEMORY HIERARCHY
The memory hierarchy consists of the total memory system of any computer. The
memory components range from a higher-capacity slow auxiliary memory to a
relatively fast main memory to cache memory that is accessible to the high-speed
processing logic.
[Figure: the memory hierarchy, from magnetic disk (auxiliary memory) up to the main memory and cache]
The main memory is at the central place as it can communicate directly with
the CPU and through the Input/Output or I/O processor with the auxiliary devices.
Cache memory is placed in between the CPU and the main memory.
Cache usually stores the program segments currently being executed in the
CPU and temporary data frequently asked by the CPU in the present calculations.
The I/O processor manages the data transfer between the auxiliary memory and
the main memory. The auxiliary memory usually has a large storage capacity but
a low access rate as compared to the main memory and hence is relatively
inexpensive. Cache is very small but has a very high access speed and is relatively
expensive. Thus, we can say that the higher the access speed of a memory, the
higher its cost per bit.
13.3 MAIN MEMORY
The memory unit that communicates directly with the CPU is called main memory.
It is relatively large and fast and is basically used to store programs and data
during computer operation. The main memory can be classified into the following
two categories: RAM and ROM.
13.3.1 RAM
The term, Random Access Memory (RAM), is basically applied to the memory
system that is easily read from and written to by the processor. For a memory to
be random access means that any address can be accessed at any time, i.e., any
memory location can be accessed in a random manner without going through any
other memory location. The access time for each memory location is the same.
The two main classifications of RAM are Static RAM (SRAM) and Dynamic
RAM (DRAM).
Static RAM or SRAM
Static RAM is made from an array of flip-flops where each flip-flop maintains a
single bit of data within a single memory address or location.
SRAM is a type of RAM that holds its data without external refresh as long
as power is supplied to the circuit. The word ‘static’ indicates that the memory
retains its content as long as power is applied to the circuit.
Dynamic RAM or DRAM
Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called a refresh circuit. This circuitry reads the contents of
each memory cell many hundreds of times per second, whether or not the memory
cell is currently being used by the computer. Due to the way in
which the memory cells are constructed, the reading action itself refreshes the
contents of the memory. If this is not done regularly, then the DRAM will lose its
contents even if it continues to have power supplied to it. Because of this refreshing
action, the memory is called dynamic.
13.3.2 ROM
In every computer system, there is a portion of memory that is stable and impervious
to power loss. This type of memory is called Read Only Memory or in short
ROM. It is non-volatile memory, i.e., information stored in it is not lost even if the
power supply goes off. It is used for permanent storage of information and it
possesses random access property.
Self-Instructional
256 Material
The most common application of ROM is to store the computer’s Basic
Input-Output System (BIOS), since the BIOS is the code that tells the processor
how to access its resources on powering up the system. Another application is
storing the code for embedded systems.
There are different types of ROMs. They are as follows:
PROM or Programmable Read Only Memory: In an ordinary ROM, data
is written at the time of manufacture. In a PROM, however, the contents can
be programmed by the user with a special PROM programmer. PROM provides
flexible and economical storage for fixed programs and data.
EPROM or Erasable Programmable Read Only Memory: This allows
the programmer to erase the contents of the ROM and reprogram it. The
contents of EPROM cells can be erased using ultraviolet light and
reprogrammed with an EPROM programmer. This type of ROM provides
more flexibility than ROM during the development of digital systems: since
EPROMs retain the stored information for a long duration, any change can
be easily made.
EEPROM or Electrically Erasable Programmable Read Only
Memory: In this type of ROM, the contents of the cell can be erased
electrically by applying a high voltage. EEPROM need not be removed
physically for reprogramming.
13.4 AUXILIARY MEMORY
[Figure: a magnetic disk pack — surfaces 2, 3, 4, ..., 2n, each with a read/write head; the set of corresponding tracks on all surfaces forms a cylinder]
The subdivision of a disk surface into tracks and sectors is shown in Figure 13.3.
[Figure 13.3: a disk surface divided into concentric tracks and pie-shaped sectors, with a read/write head]
Suppose s bytes are stored per sector, there are p sectors per track, t
tracks per surface and m surfaces. Then the capacity of the disk is
Capacity = m × t × p × s bytes
If d is the diameter of the disk, the recording density along a track is
Density = (p × s) / (π × d) bytes/inch
since p × s bytes are stored around a track of circumference π × d.
A set of disk drives is connected to a disk controller. The disk controller
accepts commands and positions the read/write heads for reading or writing. When
the read/write command is received by the disk controller, the controller first
positions the arm so that the read/write head reaches the appropriate cylinder.
The time taken to reach the appropriate cylinder is known as the Seek time (Ts). The
maximum seek time is the time taken by the head to reach the innermost cylinder
from the outermost cylinder or vice versa. The minimum seek time will be 0 if the
head is already positioned on the appropriate cylinder. Once the head is positioned
on the cylinder, there is a further delay because the read/write head has to be
positioned over the appropriate sector. This rotational delay is also known as the
Latency time (Tl). The average rotational delay equals half the time taken by the disk to
complete one rotation.
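The capacity and delay formulas above can be worked through with a small calculation. All the numeric values below are assumed for illustration; the text defines only the symbols:

```python
# Illustrative sketch of the disk formulas above; the numbers are assumed.

s = 512        # bytes per sector
p = 63         # sectors per track
t = 1024       # tracks per surface
m = 8          # surfaces

capacity = m * t * p * s                     # Capacity = m x t x p x s bytes
print(capacity)                              # 264241152 bytes

rpm = 7200                                   # assumed rotational speed
rotation_time_ms = 60_000 / rpm              # one full rotation, in milliseconds
avg_latency_ms = rotation_time_ms / 2        # average rotational delay (Tl)
avg_seek_ms = 9.0                            # assumed average seek time (Ts)
print(round(avg_seek_ms + avg_latency_ms, 2))   # 13.17 ms average positioning delay
```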
Floppy Disk
A floppy disk, also known as diskette, is a very convenient bulk storage device
and can be taken out of the computer. It can be either 5.25" or 3.5" size, the 3.5"
size being more common. It is contained in a rigid plastic case. The read/write
heads of the disk drive can write or read information from both sides of the disk.
The storage of data is in the magnetic form, similar to that in hard disk. The 3.5"
floppy disk has storage up to 1.44 Mbytes. It has a hole in the centre for mounting
it on the drive. Data on the floppy disk is organized during the formatting process.
The disk is organized into sectors and tracks. The 3.5" high-density disk has 80
concentric circles called tracks, and each track is divided into 18 sectors. Tracks
and sectors exist on both sides of the disk. Each sector can hold 512 bytes of data
plus other information like address, etc. It is a cheap read/write bulk storage device.
Magnetic Tapes
Magnetic disks are used by almost all computer systems as permanent storage
devices; however, magnetic tape is still a popular form of low-cost magnetic storage
media, and it is primarily used for backup storage purposes. The standard backup
magnetic tape device used today is Digital Audio Tape (DAT). These tapes provide
approximately 1.2 Gbytes of storage on a standard cartridge-size cassette tape.
These magnetic tape memories are similar to that of audio tape recorders.
A magnetic tape drive consists of two spools on which the tape is wound.
Between the two spools there is a set of nine magnetic heads to write and read
information on the tape. The nine heads operate independently and record
information on nine parallel tracks, parallel to the edge of the tape. Eight tracks are
used to record a byte of data and the ninth track is used to record a parity bit for
each byte. The standard width of the tape is half an inch. The number of bits per
inch (bpi) is known as recording density.
Normally, when data is recorded into the tape, a block of data is recorded
and then a gap is left and then another block is recorded and so on. This gap is
known as the Inter-Block Gap (IBG). The blocks are normally ten times as long
as the IBG. The Beginning Of Tape (BOT) is indicated by a metal foil known as
the BOT marker, and the End Of Tape (EOT) is indicated by a metal foil known
as the end-of-tape marker.
The data on the tape is arranged as blocks and cannot be addressed directly.
Blocks can only be retrieved sequentially, in the same order in which they were written.
Thus, if a desired record is at the end of the tape, earlier records have to be read
before it is reached and hence, the access time is very high as compared to magnetic
disks.
Optical Disks
Optical disk storage technology provides the advantage of high volume and
economical storage with somewhat slower access times than traditional magnetic
disk storage.
CD-ROM
Compact Disk-Read Only Memory (CD-ROM) optical drives are used for the
storage of information that is distributed for read-only use. A single CD-ROM can
hold up to 800 MB of information. Software and large reports distributed to a
large number of users are good candidates for this media. CD-ROM is also more
reliable for distribution than floppy disks or tapes. Nowadays, almost all software
and documentations are distributed only on CD-ROM.
In CD-ROMs the information is stored evenly across the disk in segments
of the same size. Therefore, in CD-ROMs, data stored on a track increases as we
go towards the outer surface of disk and hence, CD-ROMs are rotated at variable
speeds for the reading process.
Erasable Optical Disk
Recent development in optical disks is the erasable optical disks. They are used
as an alternative to standard magnetic disks when speed of the access is not
important and the volume of the data stored is large. They can be used for image
and multimedia applications and for high-volume, low-activity backup storage.
Data on these disks can be rewritten as repeatedly as on a magnetic disk. The
erasable optical disks are portable and highly reliable and have a longer life. They
use a format that makes semi-random access feasible.
Check Your Progress
1. Where is cache memory located in the memory hierarchy?
2. Write the function of I/O processor.
3. Write the purpose of RAM.
4. What is dynamic RAM?
13.5 ASSOCIATIVE MEMORY
[Figure: an associative memory — a data register and a mask register above an array of words 0 to n − 1, each word with associated tag bits]
Two registers are used with a CAM: a MASK register, also called the key register,
and a data register, also called the argument register. The size of each register is
the same as that of one word stored in the associative memory. In addition, each
word has a circuit to perform the comparison operation. Corresponding to each
word, one or more tag bits are associated. Each set of tag bits forms a bit-slice
register whose size equals the number of words in the CAM.
[Figure: block diagram of an associative memory — an argument register A (bits A1 ... Aj ... An) and a key register K (bits K1 ... Kj ... Kn) feed an associative memory array and logic of m words, n bits per word, with input, read, write and output lines and a match register M]
Here each word is matched in parallel with the content of the argument register.
The bit in M corresponding to each memory word holds the match status. Once
the matching process is done, the bits in the match register corresponding to the
words in the associative memory that have been matched are set. We can compare
only a portion of the argument register by using a mask in the key register. The
entire argument is compared with each memory word if the key register contains
all 1’s; otherwise, only those bits of the argument whose corresponding bits in the
key register are 1 take part in the comparison. Thus, the key provides a mask, or
identifying piece of information, which specifies how the memory reference is to
be made.
Let us consider an example where the argument register A and the key register K
have the bit configurations:
A 10101010
K 00001111
These two registers set the search pattern to 1010 in the last four bits: every word
that contains the pattern 1010 in its last four bits will set its match bit. Let us
consider the following three words and find their match status for this pattern:
Word1 10101111 no match
Word2 11111010 match
Word3 10101011 no match
Here only Word2 sets its match status. Thus, when the CAM performs a search,
the selected pattern (the presence of 1010 in the last four bits) is compared with
each word, and the tag bit for a word is set to one if a match is found; hence the
tag bit for Word2 will be set. At the end of this process, all matching words can be
identified by their tag bits. In systems that support more complex operations,
often more than one tag bit is used.
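The masked search in this example can be sketched in code. The loop below stands in for the parallel comparison hardware; the bit patterns are the ones from the example above:

```python
# Illustrative sketch of a CAM masked search. In real hardware every word is
# compared simultaneously; here a loop stands in for that parallel logic.

def cam_search(words, argument, key):
    """Return the tag (match) bits: 1 where a word agrees with the argument
    in every bit position where the key register holds a 1."""
    tags = []
    for word in words:
        # XOR exposes differing bits; the key masks out the "don't care" ones.
        tags.append(1 if (word ^ argument) & key == 0 else 0)
    return tags

A = 0b10101010          # argument register
K = 0b00001111          # key register: search for pattern 1010 in the last four bits
words = [0b10101111, 0b11111010, 0b10101011]   # Word1, Word2, Word3

print(cam_search(words, A, K))   # [0, 1, 0] -- only Word2 matches
```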
[Figure: one cell of the associative memory — inputs Aj and Kj, an R-S flip-flop Fij with read and write lines, and match logic whose output goes to Mi]
[Figure: virtual-to-physical address translation — the page-number field (bits 31–12) of a 32-bit virtual address is looked up in an associative memory; a hit yields the physical page address directly, while a miss requires translation through the page table, whose valid bits indicate whether each page resides in physical memory or on disk storage]
13.6 ANSWER TO CHECK YOUR PROGRESS QUESTIONS
1. Cache memory is placed between the CPU and the main memory in memory
hierarchy.
2. The function of I/O processor is to manage the data transfer between the
auxiliary memory and the main memory.
3. The purpose of RAM is to store the programs and data that are currently in
use during computer operation.
4. Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called a refresh circuit.
13.7 SUMMARY
The memory hierarchy consists of the total memory system of any computer.
The memory components range from higher capacity slow auxiliary memory
to a relatively fast main memory to cache memory that can be accessible to
the high speed processing logic.
The memory unit that communicates directly with the CPU is called main
memory. It is relatively large and fast and is basically used to store programs
and data during computer operation.
The two main classifications of RAM are Static RAM (SRAM) and Dynamic
RAM (DRAM).
Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called refresh circuit.
In every computer system, there is a portion of memory that is stable and
impervious to power loss. This type of memory is called Read Only Memory
or in short ROM. It is non-volatile memory, i.e. information stored in it is
not lost even if the power supply goes off.
Secondary storage, also known as external memory or auxiliary storage,
differs from primary storage in that it is not directly accessible by the CPU.
An associative memory, also called content-addressable memory (CAM),
is a very high speed memory that provides a parallel search capability. It is
capable of searching the contents of all its locations at any instant of time.
13.8 KEY WORDS
Main memory: Communicates directly with the CPU and with the auxiliary
devices through the I/O processor.
RAM: Main memory of a computer system.
DRAM: A type of RAM that only holds its data if it is continuously accessed
by special logic called refresh circuit.
13.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES
UNIT 14 MEMORY ORGANIZATION
14.0 INTRODUCTION
In this unit, you will learn about the cache memory and virtual memory. Cache
memory is defined as a very high speed memory that is used in a computer system
to compensate the speed differential between the main memory access time and
the processor logic. A very high speed memory called cache is used to increase
the speed of processing by making the current programs and data available to the
CPU at a rapid rate. It is placed between the CPU and the main memory. The
virtual memory is a concept that permits the user to construct a program with size
more than the total memory space available to it. This technique allows user to use
the hard disk as if it is a part of main memory. You will also learn about the memory
management hardware.
14.1 OBJECTIVES
14.2 CACHE MEMORY
The cache is a small, fast memory placed between the CPU and the main memory.
The system performance can improve dramatically by using cache memory at a
relatively lower cost. The word cache is derived from the French word that means
hidden. It is named so because the cache memory is hidden from the programmer
and appears as if it is a part of the system’s memory space. It improves the speed
of the system because it is very fast and can be rapidly accessed by the processor,
with a fetch cycle time comparable to the speed of the CPU. The whole concept
of using cache memory is based on the principle of hierarchy and locality of reference.
This results in an overall increase in the speed of the system. In a system that uses
a tiny 512 KB cache memory and a RAM of 2 GB, it is observed that the processor
finds what it needs in the cache about 95 per cent of the time. The initial
microprocessors had truly tiny cache memories of, for example, 32 bytes, but in
the early 1990s cache sizes of 8 KB to 32 KB became common. By the end of the
1990s, multilevel cache configurations became common: one cache of capacity
up to 128 KB is internal to the chip, and another, external to the chip, forms the
second-level cache with capacity up to 1 MB.
In Figure 14.1, it can be seen that the cache memory is attached to both the
processor and the main memory in parallel via the address and data buses. This is
done so that data consistency is maintained between the cache and the main memory.
[Figure 14.1: the CPU, cache controller, cache memory (typically 64K to 512 Mbytes) and main memory (typically 64M to 4 Gbytes) connected by the address and data buses. The address from the CPU interrogates both the cache and the main memory; if the data is in the cache, it is fetched from there rather than from the main store.]
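The hit/miss behaviour just described can be sketched with a toy direct-mapped cache. The mapping scheme and the sizes (8 lines, one word per line) are assumptions made for the example; the text describes the hit/miss idea, not a particular organization:

```python
# Illustrative sketch of cache hits and misses using a tiny direct-mapped
# cache. The 8-line size and one-word lines are assumptions for the example.

NUM_LINES = 8

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES       # tag stored per cache line
        self.data = [None] * NUM_LINES

    def read(self, memory, address):
        line = address % NUM_LINES           # line index from the low address bits
        tag = address // NUM_LINES           # remaining high bits form the tag
        if self.tags[line] == tag:
            return self.data[line], "hit"    # fetched from the cache
        # miss: fetch from the main store and keep a copy in the cache
        self.tags[line], self.data[line] = tag, memory[address]
        return memory[address], "miss"

memory = list(range(100, 164))               # main memory contents
cache = DirectMappedCache()
print(cache.read(memory, 5)[1])   # miss -- first reference to address 5
print(cache.read(memory, 5)[1])   # hit  -- locality of reference pays off
print(cache.read(memory, 13)[1])  # miss -- address 13 maps to the same line
```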
14.3 VIRTUAL MEMORY
As you know, all data is stored on the hard disk and the program that is under the
execution resides in the main memory. The virtual memory is a concept that permits
the user to construct a program with size more than the total memory space available
to it. This technique allows the user to use the hard disk as if it were a part of the main memory.
Hence, with this technique, a program even larger than the actual physical memory
available can execute. The only thing required is an address mapping from virtual
addresses to physical addresses in the main memory. An address generated by the
CPU during the execution of a program is called a virtual address, and the set of
such addresses is the address space. An address in the main memory is called a
physical address,
and the set of these addresses is called memory space. A virtual memory system
provides a mechanism for translating a program generated address by the processor
into a main memory location. A program uses the virtual memory address space,
which stores data and instructions. Usually, the address space is larger than the
memory space, where the actual manipulation has to be done. If there is a main
memory with a capacity of 32K words, 15 bits are required to specify a physical
memory address. Let the system have an auxiliary memory of 1M size; it will then
require 20 address bits to access the data. As said earlier, in a virtual memory
system a mapping from the virtual address space to the physical address space is
required: the system uses a table that maps a 20-bit virtual address to a 15-bit
physical address. This translation is required for every word (Figure 14.2).
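The 20-bit-to-15-bit mapping described here can be sketched as a table lookup. The 1K-word page size and the page-table entries below are assumed for illustration; the text only states the address widths:

```python
# Illustrative sketch of virtual-to-physical address translation. Assuming a
# 1K-word page, a 20-bit virtual address splits into a 10-bit page number and
# a 10-bit offset; main memory holds 32 blocks of 1K words (a 15-bit address).

PAGE_SIZE = 1024                  # 1K words per page (assumed)

# page table: virtual page number -> physical block number (illustrative values)
page_table = {0: 3, 1: 7, 2: 0, 5: 12}

def translate(virtual_address):
    """Map a virtual address to a physical address, or fail on a page fault."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: page %d is not in main memory" % page)
    return page_table[page] * PAGE_SIZE + offset   # physical address

print(translate(5 * PAGE_SIZE + 20))   # page 5 -> block 12: 12*1024 + 20 = 12308
```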
1. The cache is used for storing program segments currently being executed in
the CPU and for the data frequently used in the present calculations.
2. The virtual memory is a concept that permits the user to construct a program
with size more than the total memory space available to it. This technique
allows user to use the hard disk as if it is a part of main memory.
3. If we want to access a block in cache and the block is present there, we call
it a hit, else a miss.
14.6 SUMMARY
The cache is a small, fast memory placed between the CPU and the main
memory. The system performance can improve dramatically by using cache
memory at a relatively lower cost.
It is the role of the cache controller to determine whether the data desired by
the processor resides in the cache memory or it is to be obtained from the
main memory.
The virtual memory is a concept that permits the user to construct a program
with size more than the total memory space available to it. This technique
allows user to use the hard disk as if it is a part of main memory. Hence with
this technique, a program with size even larger than the actual physical
memory available can execute.
The objective of virtual memory is to have maximum possible portion of
program in the main memory and the remaining portion of program to reside
on the hard disk.
When we try to access a page in the main memory and the page is present
there, we call it a page hit; otherwise, a page fault occurs.
14.7 KEY WORDS
Cache: A very high speed memory used to increase the speed of processing
by making the current programs and data available to the CPU.
Virtual memory: A technique that allows the execution of processes that
may not be completely in the memory.