Computer Organization and Architecture
DR DATUKUN K. A
PROF. DR. P. SELLAPPAN
However, the publisher and the authors shall not be liable for any loss or damage suffered by any person as a result of the use of the information contained in this book.
PREFACE
Computer Organization is concerned with the structure and behavior of a computer system as seen by the user. It deals with the components of a system and the way they are interconnected: Computer Organization tells us how exactly all the units in the system are arranged and interconnected. This is in contrast to Computer Architecture, which describes the attributes of the system that are visible to the programmer. Computer Organization deals with low-level design issues; it involves the physical components (circuits, control signals, memory and peripherals) and the way they realise the architecture.
This book is intended for beginners who have little or no knowledge of programming; it is also suitable for self-study. This text is basically prepared for the use of 300 level Computer Science students of Plateau State University Bokkos, but it is certainly suitable for students of any university. Good secondary schools, colleges and polytechnics may find it useful, particularly for Science and Engineering courses. It starts with the basics, but progresses rapidly to advanced topics such as circuit design and minimization.
The book is written in a simple, easy-to-read style and contains examples to illustrate the design concepts presented. It also contains practical demo procedures and outputs to test the reader's understanding.
ACKNOWLEDGEMENTS
We would like to gratefully acknowledge the contributions of several people who have assisted us in the preparation of this book. We would like to thank all students of Plateau State University for their positive response to and understanding of this course material. The academic staff of the Computer Science Department of Plateau State University are much appreciated as well for their support. Our grateful thanks also go to the Vice Chancellor of Plateau State University for granting us the opportunity, freedom, encouragement and support needed in the preparation of this
manuscript. We also thank him for creating and nurturing an environment that actively promotes
learning, research, teamwork and personal development. His dynamic leadership is greatly
appreciated.
Finally, and most importantly, we thank God for giving us the interest, passion, motivation, strength and good health to complete this work.
Dr Datukun K. A
December 2020
ABOUT THE AUTHORS
Dr Datukun K. A is currently a Lecturer with Plateau State University Bokkos in the Department of Computer Science. Prior to lecturing in Plateau State University Bokkos, he worked as an ICT Supervisor and Programmer II at Salem University Lokoja, where he was later converted to a lecturer in the Department of Computer Science, College of ICT. He holds a B.Sc in Computer Science and a higher degree from the Malaysia University of Science and Technology, Malaysia. He has published several journal articles and conference papers.
Prof. Dr. P. Sellappan is a Professor in the School of Science and Engineering, and Provost of the Malaysia University of Science and Technology.
Prior to joining Malaysia University of Science and Technology, he held a similar academic
position in the Faculty of Computer Science and Information Technology, University of Malaya,
Malaysia.
He holds a Bachelor in Economics degree with a Statistics major from the University of Malaya, a Master in Computer Science from the University of London (UK), and a PhD in an interdisciplinary field. Having worked in academia for more than 30 years, he has taught a wide range of courses at both undergraduate and postgraduate levels: Programming Languages, Data Structures and Algorithms, System Analysis and Design, Software Engineering, Human Computer Interaction, Database Systems, Data Mining and Health Informatics.
Prof. Sellappan is an active researcher. He has received several national research grants from the Ministry of Science, Technology and Innovation under the E-Science and FRGS schemes to undertake IT-related research projects, and arising from these projects he has published numerous research papers. As a supervisor, he has supervised more than 70 Master and PhD theses. He also serves on the editorial/review boards of several international journals and conferences, and is editor of the Journal of Advanced Applied Sciences and the Plain Truth magazine. He is also a certified trainer, external examiner, moderator and program assessor for several local and international
universities.
Together with other international experts, he has also served as an IT Consultant for several local
and international agencies such as the Asian Development Bank, the United Nations
Development Program, the World Bank, and the Government of Malaysia. His professional affiliations include membership in the Chartered Engineering Council (UK), the British Computer Society (UK), the Institute of Statisticians (UK), and Malaysian national professional bodies.
CONTENTS
CHAPTER 1 INTRODUCTION AND OBJECTIVES.................................................. 1
1.0 Introduction…..…………………………………………………… 1-2
1.1 Objectives……………………………………………………..….. 3
4.4.4 Multiplexer………………………………………………………... 42-43
4.4.5 De-multiplexer……………………………………………………. 44
4.4.6 Magnitude Comparator…………………………………………… 45
4.5 Sequential Circuits………………………………………………... 45-46
4.5.1 Clock Signal of a Clock Pulse Generator………………………… 46-47
4.5.2 Flip-Flops………………………………………………………… 47
4.5.2.1 S-R Flip-flops…………………………………………………….. 47-52
4.5.2.2 J-K Flip-Flop……………………………………………………… 52-54
4.5.2.3 D Flip-Flop………………………………………………………... 54-55
4.5.2.4 T Flip-Flop………………………………………………………... 55-57
4.6 Pulses trigger clocked flip-flops………………………………….. 57-58
4.7 Registers…………………………………………………………... 58-59
4.7.1 Storage (Simple) Registers……………………………………….. 59-60
4.7.2 Transformation Registers………………………………………… 60
4.7.3 Shift Register……………………………………………………... 61
4.8 Counters………………………………………………………….. 61-62
REFERENCES …………………………………………………………. 80
CHAPTER 1
INTRODUCTION AND OBJECTIVES
1.0 Introduction
Computer Organization and Architecture is the study of the internal working, structuring and implementation of a computer system. Architecture refers to the externally visible attributes of the system. Externally visible attributes, here in computer science, mean the way a system is visible to the logic of programs (not to the human eye). Organization of a computer system is the way of practical implementation which realises that architecture. The architecture of a computer system can be considered as a catalogue of tools available to any operator using the system, while the organization is the way the system is structured so that all of those tools can be used efficiently.
The history of computer systems, in the strict sense of the name, dates back to the basic human need for computation. We, however, are concerned with the architecture and organisation of electronic computer systems only, as 'the computing systems' before this had very vague (or at least different!) representations of these terms in their construction. The first among the electronic computers was the ENIAC, designed by John Mauchly and J. Presper Eckert. This, although a great achievement altogether, was not of much importance as far as standards of architecture and organisation are concerned. Programming this giant machine required manual changes of circuitry by expert individuals, by changing connecting wires and setting lots of switches; it sure was a tedious task. Besides, ENIAC was not a binary machine: it worked on the decimal number system.
A major breakthrough came with the draft design of the second electronic computer, the Electronic Discrete Variable Automatic Computer (EDVAC). This computer was proposed by John von Neumann and others in 1945. It used the stored-program model, wherein all instructions were also to be stored in memory along with the data to be processed, thereby removing the need to change the hardware structure in order to change the program. The architecture of this computer described a digital system divided into a Processing Unit consisting of an Arithmetic and Logic Unit and processor registers, a Control Unit consisting of a Program Counter and an Instruction Register, a Memory Unit, and Input/Output mechanisms. This basic structure has since served as the basic idea of a computer system, and the trend continues even today with few changes in the design. This architecture, however, is more popularly associated with its implementation in the Institute for Advanced Study (IAS) computer, as von Neumann later shifted to that project. The IAS computer is the upgraded version of the Electronic Numerical Integrator and Computer (ENIAC) machine. The IAS machine was designed by von Neumann around the stored-program concept, which allowed the machine operator to store the program along with its input and output in memory locations, whereas in ENIAC the program had to be entered manually.
A major advancement in the field of electronics was achieved at Bell Labs when William Shockley and his colleagues invented the transistor. Transistors were devices comparable in purpose to a vacuum tube, but amazingly small, efficient and reliable. Transistors revolutionized the organization of a normal computer system: the systems grew smaller, consumed less power, generated less heat, and became more reliable and much more efficient. The generation of computers using transistors as their basic components formed the second generation of computers. This was just a beginning as, soon, a new phase took over. Integrated Circuits (ICs) were developed which could contain more than one transistor on a single chip. This further reduced size, power consumption and heat generation, and led to the development of the third generation of computers. After this, the number of transistors on a single IC kept increasing, and the name of the technology involved kept changing from MSI to LSI to VLSI to ULSI, but the basic structure of the IC-based computer was maintained. Now a whole computer is available on a single chip, thanks to VLSI technology.
1.1 Objectives
This course is intended to teach the basics involved in data representation and the digital logic circuits used in a computer system. This includes the general concepts in digital logic design, including logic elements, and their use in combinational and sequential logic circuit design. This course will also expose students to the basic architecture of the processing, memory and I/O units of a computer system.
The student will be able to:
•Identify, understand and apply different number systems and codes.
•Understand the digital representation of data in a computer system.
•Understand the general concepts in digital logic design, including logic elements, and their use in combinational and sequential logic circuit design.
•Understand computer arithmetic, and use it to formulate and solve problems.
CHAPTER 2
NUMBER SYSTEMS
A number system of base (also called radix) r is a system which has r distinct symbols for its r digits. A string of these symbolic digits represents a number. To determine the quantity that the number represents, we multiply each digit by an integer power of r depending on the place in which it is located and then find the sum of the weighted digits. The decimal, binary, octal and hexadecimal systems described below are all positional number systems of this kind.
The decimal number system has ten digit symbols, represented as 0,1,2,3,4,5,6,7,8 and 9. Any decimal number can be represented as a string of these digits. Since there are ten digit symbols involved, the base or radix of this system is 10. Thus, the string 234.5 represents the quantity 2 x 10^2 + 3 x 10^1 + 4 x 10^0 + 5 x 10^-1 in base ten. In algebra, a decimal number can be defined as a number whose whole number part and fractional part are separated by a decimal point. The dot in a decimal number is called a decimal point, and the digits following it represent a value smaller than one.
In mathematics and digital electronics, a binary number is a number expressed in the base-2 (binary) numeral system, which uses only two symbols: typically, "0" and "1". The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit, and a string of these two symbols is a string of bits. The base of the binary number system is 2. To convert the value of a binary number to its decimal equivalent, one has to find its quantity, which is found by multiplying each digit by the power of 2 corresponding to its position and summing the results. For example, (101010)2 = 1 x 2^5 + 0 x 2^4 + 1 x 2^3 + 0 x 2^2 + 1 x 2^1 + 0 x 2^0 = 32 + 0 + 8 + 0 + 2 + 0 = (42)10 (in decimal).
The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three (starting from the right). An octal number system therefore has eight digits, represented as 0 to 7. To find the decimal equivalent of an octal number, one has to find the quantity of the octal number in the same weighted-sum manner. In an octal number (23.4)8, the subscript 8 indicates that it is an octal number. Similarly, a subscript 2 indicates a binary number, 10 a decimal number and H a hexadecimal number. In case there is no subscript specified, the number should be treated as a decimal number. The decimal equivalent of an octal number is found as shown below:
(23.4)8 = 2 x 8^1 + 3 x 8^0 + 4 x 8^-1
= 2 x 8 + 3 x 1 + 4 x 1/8
= 16 + 3 + 0.5
= (19.5)10
The hexadecimal numeral system, often shortened to "hex", is a numeral system made up of 16 symbols (base 16). The standard numeral system is called decimal (base 10) and uses ten symbols: 0,1,2,3,4,5,6,7,8,9. Hexadecimal uses the ten decimal digits and six extra symbols, A to F, where A is equivalent to 10 and F is equivalent to 15 in decimal. For example:
(F2)16 = 15 x 16^1 + 2 x 16^0
= 240 + 2
= (242)10
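All three worked examples use the same weighted-sum rule, which is easy to automate. The short Python sketch below (an illustration added for this text; the function name to_decimal and its digit set are our own choices) converts a digit string in any base from 2 to 16 to its decimal value:

DIGITS = "0123456789ABCDEF"

def to_decimal(number, base):
    """Convert a digit string in the given base (2-16) to a decimal value
    by summing digit x base^position, as in the worked examples above."""
    if "." in number:
        whole, frac = number.split(".")
    else:
        whole, frac = number, ""
    value = 0.0
    for position, digit in enumerate(reversed(whole)):
        value += DIGITS.index(digit.upper()) * base ** position
    for position, digit in enumerate(frac, start=1):
        value += DIGITS.index(digit.upper()) * base ** -position
    return value

print(to_decimal("101010", 2))   # 42.0
print(to_decimal("23.4", 8))     # 19.5
print(to_decimal("F2", 16))      # 242.0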
Numbers are converted from one base to another depending on the bases involved. Numbers in base ten can be converted to any other base using one particular method, while another method is used to convert numbers from other bases to base ten. Converting between two bases other than base ten is normally done by first converting to base ten and then converting to the base in question.
To convert a decimal whole number to binary, the number is repeatedly divided by 2 and the remainder (0 or 1) is written down at each step, until the quotient becomes 0. The binary result is then read from the last remainder obtained back up to the first, that is, from the bottom of the working to the top.
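A minimal Python sketch of this repeated-division procedure is given below (purely illustrative; the function name is our own):

def decimal_to_binary(n):
    """Convert a non-negative decimal integer to a binary string by
    repeated division by 2, collecting remainders from last to first."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))   # the remainder is the next bit
        n //= 2                         # integer quotient for the next step
    return "".join(reversed(remainders))  # read from last remainder to first

print(decimal_to_binary(42))   # 101010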
2.5.2 Conversion of Binary to Octal and Hexadecimal
The rules for these conversions are straightforward. For converting binary to octal, the binary number is divided into groups of three bits from right to left, and each group is then replaced by its octal equivalent. For example, the number (1101011.001)2 can be grouped as 001 101 011 . 001 and interpreted group by group as 1, 5, 3 . 1, giving (153.1)8. Note that the value of the number is unchanged: 0s have only been added to complete the grouping at the leftmost end. Also note the style of grouping before and after the binary point: digits before the point are grouped in threes from right to left, while digits after the point are grouped from left to right. For conversion to hexadecimal the same procedure is used with groups of four bits.
Tables 1 and 2 show the binary equivalents of the octal and hexadecimal digits respectively: each octal digit is represented by three bits, while each hexadecimal digit is represented by four bits.
Table 2: Binary representation of hexadecimal numbers
Hex Binary
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
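The grouping rule in Section 2.5.2 can be sketched in Python as follows (an illustrative helper for whole binary numbers only; the function name and padding strategy are our own choices):

def binary_to_base(bits, group):
    """Convert a binary integer string to octal (group=3) or hexadecimal
    (group=4) by grouping bits from the right, as described above."""
    digits = "0123456789ABCDEF"
    bits = bits.zfill((len(bits) + group - 1) // group * group)  # pad left with 0s
    out = ""
    for i in range(0, len(bits), group):
        out += digits[int(bits[i:i + group], 2)]  # each group becomes one digit
    return out

print(binary_to_base("1101011", 3))  # 153 (octal)
print(binary_to_base("1101011", 4))  # 6B (hexadecimal)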
The binary number system is the most natural one for the computer because of the two stable states (0, 1) of its components. But it is not a very natural system for us, since we work with the decimal number system.
Then how does the computer do its arithmetic? One solution, which is followed in most computers, is to convert all input values to binary. The computer then performs the arithmetic operations and finally converts the results back to decimal numbers so that we can interpret them easily. Is there any alternative to this scheme? Yes, there exists an alternative way of performing computation in decimal form, but it requires that the decimal numbers be coded suitably before performing these computations. Normally, the decimal digits are coded in 6-8 bits as alphanumeric characters, but for the purpose of arithmetic calculations the decimal digits are coded in 4-bit Binary Coded Decimal (BCD) form.
As we know, 2 binary bits can represent 2^2 = 4 different combinations, 3 bits can represent 2^3 = 8 combinations and 4 bits can represent 2^4 = 16 combinations. To represent the decimal digits in binary form we require only 10 combinations, so we need a 4-bit code. One of the common representations is to use the first ten 4-bit binary combinations to represent the ten decimal digits; this is the BCD code.
Let us represent 43.125 in BCD: it is 0100 0011.0001 0010 0101. Compare it with the pure binary equivalent (101011.001)2, which needs far fewer bits.
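The digit-by-digit BCD encoding can be illustrated with the following Python sketch (the helper name to_bcd is our own):

def to_bcd(decimal_string):
    """Encode each decimal digit (and keep the point) as its own 4-bit group."""
    groups = []
    for ch in decimal_string:
        if ch == ".":
            groups.append(".")
        else:
            groups.append(format(int(ch), "04b"))  # one 4-bit code per digit
    return " ".join(groups)

print(to_bcd("43.125"))  # 0100 0011 . 0001 0010 0101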
But what about alphabets and special characters like +, -, etc.? How do we represent these in the computer? A set containing the letters of the alphabet (in both cases), the decimal digits (10 in number) and special characters (roughly 10-15 in number) consists of at least 70-80 elements. One such code generated for this set is the popular ASCII (American National Standard Code for Information Interchange). This code uses 7 bits to represent 128 characters; an ASCII character is normally stored in an 8-bit byte, with the extra bit available for parity or for extensions.
Similarly, binary codes can be formulated for any set of discrete elements, e.g. colours, the spectrum, the musical notes, chessboard positions etc. The individual digits of these binary codes are called bits. But how are arithmetic calculations performed with these bits? How are words like ABCD stored in the computer? This section will try to highlight all these points.
Till now we have discussed number systems, BCD and alphanumeric representations. But how are these codes actually used to represent data for scientific calculations? The computer is a discrete digital device and it stores information in flip-flops, which are two-state (0, 1) devices. The basic requirements of computational data representation in binary form are the representation of the sign and the magnitude and, if the number is fractional, of the binary or decimal point and an exponent. The representation of the sign is easy, because a sign can be either positive or negative; therefore one bit can be used to represent the sign, and by convention it is the leftmost bit. Thus, an n-bit magnitude can be represented as an (n+1)-bit number, where the (n+1)th bit is the sign bit and the remaining n bits represent the magnitude, as described in Figure 2.
The decimal (or binary) point can be represented by a position between the flip-flops (storage cells) in the computer. But how can one determine this position? To simplify the representation, two conventions are used:
1. Fixed point representation, where the position of the point is assumed to be fixed, either at the beginning or at the end of the number; and
2. Floating point representation, where a second register is used to keep the value of an exponent that determines the position of the binary or decimal point in the number.
Before discussing these two representations let us first discuss significant bits and the term "complement" of a number. The complements of numbers may be used to represent negative numbers in digital computers.
Each digit of a binary number, 0 or 1, is called a bit, an abbreviation for binary digit. Four bits together form a nibble; 8 bits are called a byte (8-, 16-, 32- and 64-bit arrangements are also called words). The rightmost bit is called the Least Significant Bit (LSB) while the leftmost bit is called the Most Significant Bit (MSB). The schematic in Figure 3 below illustrates the general structure of a binary number and the associated labels, which identify the significant bits.
2.7 Number Complement
There are two types of complements for a number of base r; these are called the r's complement and the (r-1)'s complement. For example, for decimal numbers the base is 10; therefore the complements are the 10's complement and the (10-1) = 9's complement. For binary numbers we talk about the 2's
and 1’s complements. But how are we to obtain complements and what do these complements
mean? Let us discuss these issues with the help of the following example:
Example 2: Find the 9’s complement and 10’s complement for the decimal number 256.
Solution
9’s complement: The 9’s complement is obtained by subtracting each of the numbers from 9
(the highest digit value). Similarly, for obtaining 1’s complement for a binary number we have to
subtract each binary digit of the number from the digit 1 in the same manner as given in the
example 3.
10’s complement: adding 1 in the 9’s complement produces the 10’s complement:
Please note that on adding the number and its 9’s complement we get 999 (for this three digit
numbers) while on adding the number and its 10’s complement we get 1000. Example 3: Find
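The complement rules above can be sketched in Python as follows (illustrative helpers; the function names and the assumption of a fixed number of digits or bits are our own):

def nines_complement(number, digits=3):
    """(r-1)'s complement for r = 10: subtract each digit from 9."""
    return (10 ** digits - 1) - number

def tens_complement(number, digits=3):
    """r's complement for r = 10: the 9's complement plus 1."""
    return nines_complement(number, digits) + 1

def ones_complement(bits):
    """1's complement: subtract each bit from 1 (i.e. invert every bit)."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """2's complement: 1's complement plus 1, kept to the same width."""
    width = len(bits)
    return format((int(ones_complement(bits), 2) + 1) % (2 ** width), "0{}b".format(width))

print(nines_complement(256), tens_complement(256))        # 743 744
print(ones_complement("1010"), twos_complement("1010"))   # 0101 0110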
Fixed-point binary numbers use a sign bit: a positive number has a sign bit of 0, while a negative number has a sign bit of 1. In fixed-point numbers we assume that the position of the binary point is at the end, which implies that all the represented numbers are integers. A negative number can then be represented in one of the following ways: signed magnitude representation, signed 1's complement representation, or signed 2's complement representation (assuming a register size of 7 bits, with the 8th bit used for error checking and correction or other purposes).
The complexity of arithmetic addition depends on the representation that has been followed. Let us discuss this with the help of the following example.
Example: Add 25 and -30 in binary using a 7-bit register, in signed magnitude representation. In this representation +25 is 0 011001 and +30 is 0 011110; we therefore say that -30 is 1 011110. To do the arithmetic addition with one negative number we have to compare the magnitudes of the numbers: the number with the smaller magnitude is subtracted from the bigger number, and the sign of the bigger number is selected for the result. The implementation of such a scheme in digital hardware requires a long sequence of control decisions as well as circuits that can add, compare and subtract numbers. Is there a better alternative? Let us first try signed 2's complement. +30 is 0 011110, so -30 in signed 2's complement notation is 1 100010 (the 2's complement of 30, taken over all bits including the sign bit). Adding, 0 011001 + 1 100010 = 1 111011, which is the signed 2's complement representation of -5, the correct result, obtained simply by adding the two numbers and discarding any carry out of the sign bit.
Another possibility, which is also simple, is the use of signed 1's complement. Signed 1's complement has a rule: add the two numbers, including the sign bit, and if the carry out of the most significant (sign) bit is one, then increment the result by 1 and discard that carry. With the same example, -30 in signed 1's complement is 1 100001, and 0 011001 + 1 100001 = 1 111010, which is the signed 1's complement representation of -5 (there is no end-around carry in this case).
Another interesting feature of these representations is the representation of 0. In signed magnitude and signed 1's complement there are two representations of zero: in signed magnitude +0 is 0 0000000 and -0 is 1 0000000, while in signed 1's complement +0 is 0 0000000 and -0 is 1 1111111. In signed 2's complement the 2's complement of 0 0000000 is 0 0000000 itself; thus, both +0 and -0 are the same in 2's complement notation. This is an added advantage in favour of 2's complement notation. The highest number which can be accommodated in a register also depends on the type of representation. In general, in an 8-bit register, 1 bit is used as the sign, so the remaining 7 bits can be used for representing the value. The highest and lowest numbers which can be represented are: for signed magnitude representation, 2^7 - 1 to -(2^7 - 1) = 128 - 1 to -(128 - 1) = 127 to -127; for signed 1's complement, 127 to -127. But for signed 2's complement we can represent +127 to -128; -128 is represented in signed 2's complement as 1 0000000.
Subtraction can be easily done using 2's complement: take the 2's complement of the subtrahend (inclusive of the sign bit) and then add the two numbers. Signed 2's complement thus provides a very simple way of adding and subtracting two numbers, and many computers (including the IBM PC) adopt signed 2's complement notation. A further reason why signed 2's complement is preferred over signed 1's complement is that it has only one representation for zero.
2.12 Overflow
An overflow is said to have occurred when the sum of two n-digit numbers occupies n + 1 digits. This definition is valid for binary as well as decimal digits. But what is the significance of an overflow for binary numbers, since mathematically the sum of two numbers is never a problem? The answer lies in the limits of the representation of numbers: every computer employs a limit for representing numbers. For instance, suppose we are using 8-bit registers for calculating the sum, but the sum of the two numbers needs 9 bits. Where are we going to store the 9th bit? An overflow also implies that the calculated result might be erroneous.
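The following Python sketch illustrates n-bit signed 2's complement addition and a simple overflow check, assuming an 8-bit register by default (the helper names and the way overflow is detected, by comparing with the true sum, are our own choices):

def to_pattern(value, bits=8):
    """Encode a signed integer as an n-bit 2's complement pattern (unsigned form)."""
    return value % (2 ** bits)

def from_pattern(pattern, bits=8):
    """Interpret an n-bit pattern as a signed 2's complement value."""
    return pattern - 2 ** bits if pattern >= 2 ** (bits - 1) else pattern

def add_signed(a, b, bits=8):
    """Add two signed integers in n-bit 2's complement and report overflow.
    Subtraction is a + (-b), i.e. adding the 2's complement of b."""
    result = (to_pattern(a, bits) + to_pattern(b, bits)) % (2 ** bits)  # drop carry out
    value = from_pattern(result, bits)
    overflow = value != a + b          # the true sum does not fit in the register
    return value, overflow

print(add_signed(25, -30))    # (-5, False)
print(add_signed(100, 100))   # (-56, True): 200 does not fit in 8 bits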
A decimal digit is represented as a combination of four bits; thus, a four-digit decimal number
will require 16 bits for the decimal digits and an additional 1 bit for the sign. Normally, to keep to the convention of one decimal digit per 4 bits, the sign is sometimes also assigned a 4-bit code. This code can be a bit combination which has not been used to represent a decimal digit;
e.g., 1100 may represent plus and 1101 can represent minus. Although this scheme wastes
considerable amount of storage space, it does not require the conversion of a decimal number to
binary. Thus, it can be used at places where the amount of computer arithmetic is less than the
amount of input/output of data; e.g., calculators or business data processing. The arithmetic in
decimal can also be performed as in binary except that instead of signed 1’s complement, signed
9’s complement is used and instead of signed 2’s complement signed 10’s complement is used.
The floating-point number representation consists of two parts. The first part of the number is a signed fixed-point number, which is termed the mantissa, and the second part specifies the decimal or binary point position and is termed the exponent. The mantissa can be an integer or a fraction. Please note that the position of the decimal or binary point is assumed and is not a physical point; wherever we represent a point it is only an assumed position. For example, the decimal number +6132.789 can be stored with a fraction mantissa of +0.6132789 and an exponent of +4, since +6132.789 = +0.6132789 x 10^4.
CHAPTER 3
DIGITAL COMPUTERS
A Digital computer can be considered as a digital system that performs various computational
tasks. The first electronic digital computer was developed in the late 1940s and was used
primarily for numerical computations. By convention, the digital computers use the binary
number system, which has two digits: 0 and 1. A binary digit is called a bit. A computer system
is subdivided into two functional entities: Hardware and Software. The hardware consists of all
the electronic components and electromechanical devices that comprise the physical entity of the
device. The software of the computer consists of the instructions and data that the computer manipulates to perform various data-processing tasks.
Boolean algebra is an attempt to represent the true-false logic of humans in mathematical form. George Boole proposed the principles of Boolean algebra in 1854, hence the name. Boolean algebra is used for designing and analyzing digital circuits, and its rules help in that analysis and design. These rules are stated as follows:
A variable in Boolean algebra can take only one of two values, 1 (TRUE) or 0 (FALSE). For example, a variable A could take either the value 1 (true) or the value 0 (false). Other examples of two-valued quantities that can be treated this way are switch positions (ON/OFF) and voltage levels (HIGH/LOW).
The three basic operations in Boolean algebra are AND, OR and NOT. Operations on variables such as A and B can be represented in tabular form, which is referred to as a "truth table". In addition, three more operators have been defined for Boolean algebra: XOR (Exclusive OR), NOR (Not OR) and NAND (Not AND). However, for designing and analyzing a logical circuit it is convenient to use the AND, NOT and OR operators, because AND and OR obey many laws similar to those of multiplication and addition in ordinary algebra:
1. Commutative Law: States that interchanging the order of operands in a Boolean equation does not change its result. For example, OR operator → A + B = B + A; AND operator → A * B = B * A.
2. Associative Law: States that when the AND (or OR) operation is applied to two or more variables, the order in which the variables are grouped does not matter. For example, (A * B) * C = A * (B * C).
3. Distributive Law: States that ANDing a variable with the OR of two variables gives the same result as ORing the individual products. For example, A * (B + C) = A * B + A * C.
10. De Morgan's Law, also known as De Morgan's theorem, works on the concept of Duality. Duality states that interchanging the operators and constants in a function, that is replacing 0 with 1 and 1 with 0, and the AND operator with the OR operator and vice versa, yields the dual of the function.
De Morgan stated two theorems, which help us in solving algebraic problems in digital electronics:
i. "The negation of a conjunction is the disjunction of the negations", which means that the complement of the product of two variables is equal to the sum of the complements of the individual variables: (A.B)' = A' + B'.
ii. "The negation of a disjunction is the conjunction of the negations", which means that the complement of the sum of two variables is equal to the product of the complements of the individual variables: (A + B)' = A'.B'.
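Both theorems can be checked mechanically by enumerating every input combination, as the small Python sketch below illustrates (added here only as a demonstration):

from itertools import product

# Check both De Morgan theorems for every combination of A and B.
for A, B in product([0, 1], repeat=2):
    lhs1 = 1 - (A & B)            # (A.B)'
    rhs1 = (1 - A) | (1 - B)      # A' + B'
    lhs2 = 1 - (A | B)            # (A+B)'
    rhs2 = (1 - A) & (1 - B)      # A'.B'
    print(A, B, lhs1 == rhs1, lhs2 == rhs2)   # both comparison columns print True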
A Boolean function is defined as an algebraic expression formed with binary variables, the logic operation symbols, parentheses and an equals sign. For example, F = A.B + C is a Boolean function, and its value can be either 0 or 1. A Boolean function can be translated into a logic diagram and vice versa. Therefore, if we code the logic operations in Boolean algebraic form and simplify the expression, we design the simplified form of the logic circuit. Given the function F1 = xyz', its realisation using logic gates is shown in Figure 4a.
Figure 4a: A Boolean function in a diagram
Digital systems are constructed using logic gates. A logic gate is an electronic circuit that
produces a typical output signal depending on its input signal. The output signal of a gate is a
simple Boolean operation of its input signal(s). Gates are the basic logic elements. These gates
include AND, OR, NOT, NAND, NOR, EXOR and EXNOR gates. Any Boolean function can be
represented in the form of gates. Figure 4b describes the logic diagram of the AND gate operation, and the characteristic table is described in Table 3. The AND gate is an electronic circuit that gives a high output (1) only if all its inputs are high. A dot (.) is used to show the AND operation, i.e. A.B. Bear in mind that this dot is sometimes omitted, i.e. AB.
Table 3: Characteristic table of the AND gate
Inputs Output
A B AB
0 0 0
0 1 0
1 0 0
1 1 1
The OR gate is an electronic circuit that gives a high output (1) if one or more of its inputs are high. A plus (+) is used to show the OR operation. The gate is described in Figure 5 and its characteristic table is given in Table 4.
Table 4: Characteristic table of the OR gate
Inputs Output
A B A+B
0 0 0
0 1 1
1 0 1
1 1 1
The NOT gate is an electronic circuit that produces an inverted version of its input at its output. It is also known as an inverter. If the input variable is A, the inverted output is known as NOT A. This is also shown as A', or A with a bar over the top, as shown at the outputs in Figure 6. The characteristic table is given in Table 5.
Table 5: Characteristic table of the NOT gate
Input Output
A A'
0 1
1 0
A NAND gate is a NOT-AND gate, which is equal to an AND gate followed by a NOT gate. The output of a NAND gate is high if any of its inputs is low, and low only when all its inputs are high. The symbol is an AND gate with a small circle on the output; the small circle represents an inversion. Figure 7 describes the logic diagram of the NAND gate operation. The characteristic table is shown in Table 6.
Figure 7: Logic diagram of NAND gate
Table 6: Characteristic table of the NAND gate
Inputs Output
A B (A.B)'
0 0 1
0 1 1
1 0 1
1 1 0
A NOR gate is a NOT-OR gate, which is equal to an OR gate followed by a NOT gate. The output of a NOR gate is low if any of its inputs is high, and high only when all its inputs are low. The symbol is an OR gate with a small circle on the output; the small circle represents an inversion. Figure 8 describes the logic diagram of the NOR gate and Table 7 gives its characteristic table.
Table 7: Characteristic table of the NOR gate
Inputs Output
A B (A+B)'
0 0 1
0 1 0
1 0 0
1 1 0
The 'Exclusive-OR' (EXOR) gate is a circuit which gives a high output if either, but not both, of its two inputs is high. An encircled plus sign (⊕) is used to show the EXOR operation. The characteristic table is given in Table 8.
Table 8: Characteristic table of the EXOR gate
Inputs Output
A B A XOR B
0 0 0
0 1 1
1 0 1
1 1 0
The 'Exclusive-NOR' (EXNOR) gate circuit does the opposite of the EXOR gate: it gives a low output if either, but not both, of its two inputs is high. The symbol is an EXOR gate with a small circle on the output; the small circle represents an inversion. The NAND and NOR gates are called universal gates, since the AND, OR and NOT functions, and hence any Boolean function, can be generated from either of them alone.
NB: A function in sum-of-products form can be implemented using NAND gates by replacing all AND and OR gates with NAND gates. A function in product-of-sums form can be implemented using NOR gates by replacing all AND and OR gates with NOR gates. The characteristic table of the EXNOR gate is given in Table 9.
Table 9: Characteristic table of the EXNOR gate
Inputs Output
A B A EXNOR B
0 0 1
0 1 0
1 0 0
1 1 1
Below is a summary truth table giving the input/output combinations for the NOT gate together with all possible input/output combinations for the other gate functions. Note that a truth table with n inputs has 2^n rows (possible occurrences). You can compare the outputs of the different gates. The logic gate symbols are summarized in Figure 11, and the combined truth table follows.
Inputs Outputs
A B AND NAND OR NOR EXOR EXNOR NOT-A NOT-B
0 0 0 1 0 1 0 1 1 1
0 1 0 1 1 0 1 0 1 0
1 0 0 1 1 0 1 0 0 1
1 1 1 0 1 0 0 1 0 0
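The combined table can be reproduced with the small Python sketch below, in which each gate is modelled as a one-line function (an illustration; the function names are our own):

def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return a ^ b
def XNOR(a, b): return NOT(XOR(a, b))

print("A B AND NAND OR NOR XOR XNOR NOT-A NOT-B")
for A in (0, 1):
    for B in (0, 1):
        row = [A, B, AND(A, B), NAND(A, B), OR(A, B), NOR(A, B),
               XOR(A, B), XNOR(A, B), NOT(A), NOT(B)]
        print(" ".join(str(x) for x in row))   # reproduces the summary table above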
The truth (characteristic) tables of NAND and NOR can be obtained from NOT (A AND B) and NOT (A OR B) respectively. Exclusive OR (XOR) is a special gate whose output is 1 only if the inputs are not equal; its inverse (exclusive NOR) acts as a comparator, producing a 1 only when the two inputs are equal.
Digital circuits frequently use only one or two types of gates, for simplicity of fabrication. Therefore, one must think in terms of functionally complete sets of gates. What does a functionally complete set imply? A set of gates with which any Boolean function can be implemented is called a functionally complete set. Commonly used functionally complete sets are {AND, OR, NOT}, {AND, NOT}, {OR, NOT}, {NAND} and {NOR}.
The simplification of Boolean expressions via maps is very useful for combinational circuit design. The map method is a simple, straightforward procedure for simplifying Boolean expressions. Map simplification may be regarded as a pictorial arrangement of the truth table which allows an easy interpretation for choosing the minimum number of terms needed to express the function algebraically. The map method is also known as the Karnaugh map or K-map, among other names. There are three methods used to simplify algebraic expressions: simple algebraic simplification, Karnaugh maps and the Quine-McCluskey method. In this course, basic use of Karnaugh maps and simple simplification of algebraic expressions will be carried out.
An algebraic expression can exist in two forms: sum of products (SOP), e.g. (A.¬B) + (¬A.¬B), and product of sums (POS), e.g. (¬A + ¬B).(A + B). If a product term of an SOP expression contains every variable of the function, either in true or complemented form, then it is called a minterm. A minterm is true for only one combination of input values of the variables. For example, in the SOP expression F(A,B,C) = (A.B.C) + (¬A.¬B.C) + (A.B) we have three product terms, namely A.B.C, ¬A.¬B.C and A.B, but only the first two of them qualify as minterms, since the third does not contain the variable C or its complement. The term A.B.C will be one only if A=1, B=1 and C=1; for any other combination of values of A, B, C the minterm will have the value zero. Similarly, the minterm ¬A.¬B.C will have the value 1 only if ¬A = 1 (i.e. A=0), ¬B = 1 (i.e. B=0) and C=1; for any other combination of values the minterm will have the value zero.
A similar type of term used in the POS form is called a maxterm. A maxterm is a sum term of a POS expression which contains all the variables of the function in true or complemented form. For example, F(A,B,C) = (A+B+C).(¬A+¬B+C) has two maxterms. A maxterm has the value 0 for only one combination of input values: the maxterm A+B+C will be 0 only for A=0, B=0 and C=0, and for all other combinations of values of A, B, C it will have the value one. Now let us look at how expressions built from such terms can be simplified.
Algebraic functions can appear in many different forms. Although a process of algebraic simplification exists, it is cumbersome because there is no fixed route that tells which rule to apply next. The Karnaugh map is a simple, direct approach to the simplification of logical expressions.
There are a finite number of Boolean functions of n input variables, yet an infinite number of possible logical expressions that can be constructed with those n input variables. Clearly, there are an infinite number of logical expressions that are equivalent, that is, they produce the same results given the same inputs. To help eliminate possible confusion, logic designers generally specify a Boolean function using a canonical or standard form, and for any given Boolean function there exists a unique canonical form. This eliminates some confusion when dealing with Boolean functions. In actual fact there are several canonical forms, but only two will be discussed in this material: minterms and maxterms. Using the duality principle, it is easy to convert between the two. A term is a variable or a product (logical AND) of several different literals. For example, if you have two variables, A and B, there are eight possible terms: A, B, A', B', A'B', A'B, AB' and AB. For three variables we have 26 different terms: A, B, C, A', B', C', A'B', A'B, AB', AB, A'C', A'C, AC', AC, B'C', B'C, BC', BC, A'B'C', A'B'C, A'BC', AB'C', A'BC, AB'C, ABC' and ABC. As you can see, as the number of variables increases, the number of terms increases dramatically. A minterm is a product containing exactly n literals. For example, the minterms for two variables are A'B', AB', A'B and AB (four minterms). Likewise, the minterms for three variables A, B and C are A'B'C', AB'C', A'BC', ABC', A'B'C, AB'C, A'BC and ABC (eight minterms). In general, there are 2^n minterms for n variables. The set of possible minterms is very easy to generate, since they correspond to the sequence of binary numbers; for example, ABC can be represented in binary as 111 and A'B'C' as 000. Exercise: write down the binary representations of the remaining three-variable minterms.
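The correspondence between minterms and binary numbers makes them easy to generate by program, as the following illustrative Python sketch shows (the function name and output format are our own choices):

from itertools import product

def minterms(variables):
    """List the 2^n minterms of the given variables, one per binary combination."""
    terms = []
    for values in product([0, 1], repeat=len(variables)):
        literals = [v if bit else v + "'" for v, bit in zip(variables, values)]
        terms.append("".join(literals))
    return terms

print(minterms(["A", "B", "C"]))
# ["A'B'C'", "A'B'C", "A'BC'", "A'BC", "AB'C'", "AB'C", "ABC'", "ABC"]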
Karnaugh maps can be used to construct a circuit when the inputs and outputs of the proposed circuit are defined. One Karnaugh map needs to be constructed for each output.
There are four minterms in a two-variable map; therefore, the map consists of four squares, one for each minterm. The 0's and 1's marked against each row and each column designate the values of the variables.
Figure 12: Two-variable map (four minterms)
Representation of functions in the two-variable map is shown in Figure 13 (a and b), with their corresponding expressions. In the case of three variables, there are eight minterms in a three-variable map; therefore, the map consists of eight squares. The map is drawn following the same scheme as in Figures 12 and 13.
Step 1: Create a simple map depending on the number of variables in the function.
Decimal equivalents of the cells are given to help in understanding the position of each cell; the decimal equivalent is not the value filled in a square. A square can either contain a 1 or be left empty (0).
The 00, 01, 11 etc. written along the top give the values of the respective variables. Wherever the value of a variable is zero, the square is said to represent its complemented form.
The value of only one variable changes when we move from one row to the next row or from one column to the next column (the rows and columns are ordered in Gray code).
Step 2: The next step in the Karnaugh map is to map the truth table onto the map. The mapping is done by putting a 1 in each square belonging to a 1 value in the truth table. This mapped map is used to arrive at simplified Boolean expressions, which can then be used for implementing the circuit.
Step 3: Now create simple algebraic expressions from the Karnaugh map. These expressions are created by using adjacency: if we have two adjacent 1's, then the expressions for those squares can be combined, since they differ in only one variable. Similarly, we search for adjacent groups of four, eight, and so on. A 1 can appear in more than one adjacent group. You must find the smallest number of the largest possible groups that together cover all the 1's.
The expressions so obtained through the Karnaugh map are in sum-of-products form, i.e. each is expressed as a sum of products of the variables. The expression obtained is one of the minimal solutions.
A method was suggested to deal with an increasing number of variables. This is a tabular approach known as the Quine-McCluskey method. This method is suitable for programming and hence provides a tool for automating design in the form of minimized Boolean expressions. The basic principle behind the Quine-McCluskey method is to remove the terms which are redundant and can be obtained from other terms. Discussion of this method is beyond this course.
CHAPTER 4
DIGITAL LOGIC CIRCUITS
Digital logic circuits are categorized into two types, combinational and sequential: one has memory (sequential) while the other does not (combinational).
Combinational circuits are interconnections of gates arranged according to certain rules to produce outputs that depend only on their present input values. By definition, combinational circuits have no memory, timing or feedback loops within their design; their outputs are determined entirely by their immediate (present) inputs. A combinational circuit comprises logic gates whose outputs at any time are determined directly from the present combination of inputs, without any regard to previous inputs.
Figure 14 describes the categories of combinational (combinatory) circuits, while Figure 14b describes the block diagram of a combinational circuit. The code converter has been discussed earlier; the arithmetic and logical function and data transmission circuits will be discussed next.
Figure 14: Categories of combinatory circuits
There are three (3) main ways of specifying the function of a combinational logic circuit:
1. Boolean algebra: an algebraic expression showing the operation of the logic circuit for each input variable, either true or false, that results in a logic 1 output.
2. Truth table: defines the function of a logic gate by providing a concise list that shows all the output states in tabular form for each possible combination of input variables that the circuit could encounter.
3. Logic diagram: a graphical representation of a logic circuit that shows the wiring and connections of each individual logic gate, each represented by a specific graphical symbol.
Based on Figure 14b, the n input variables come from an external source whereas the m output variables go to an external destination. In many applications, the source or destination are storage registers. The design of a combinational circuit follows these steps:
1. The problem is stated.
2. The total number of available input variables and required output variables is determined.
3. The input and output variables are assigned letter symbols.
4. The truth table that defines the required relationship between inputs and outputs is derived.
Common combinational circuits include adders (half adders and full adders), encoders, decoders, multiplexers, de-multiplexers, magnitude comparators, etc.; a few of these will be looked into in this course. The combinational circuit that performs the addition of two bits is called a half adder, and the one that performs the addition of three bits (two significant bits and a previous carry) is a full adder.
4.4.1 Adder
An adder or summer is a digital circuit that performs addition of numbers. We have two types, the half adder and the full adder: the half adder has no carry input, while the full adder takes a carry input into account.
4.4.1.1 Half Adder
The half adder adds two single binary digits A and B. It has two outputs, sum (S) and carry (C); it does not account for a carry coming in from a previous stage. The logic diagram is shown in Figure 15 and the symbolic diagram in Figure 16. The characteristic table is given in Table 11.
Table 11: Characteristic table for half adder
Inputs Outputs
A B S C
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
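A one-line model of the half adder (sum = XOR of the inputs, carry = AND of the inputs) is sketched below as an illustration:

def half_adder(a, b):
    """Sum is the XOR of the inputs; carry is their AND."""
    return a ^ b, a & b   # (S, C)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # reproduces Table 11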
4.4.1.2 Full Adder
The full adder adds binary digits and accounts for values carried in as well as out. Figure 17 describes the logic diagram of a full adder: A, B and Cin constitute the inputs while S and Cout constitute the outputs. The symbolic diagram is shown in Figure 18 and the characteristic table in Table 12.
Table 12: Characteristic table for full adder
Inputs Outputs
A B Cin Cout S
0 0 0 0 0
0 1 0 0 1
1 0 0 0 1
1 1 0 1 0
0 0 1 0 1
0 1 1 1 0
1 0 1 1 0
1 1 1 1 1
Adders are designed based on the number of bits they are intended to add. A 4-bit adder adds two sets of 4-bit binary numbers, as described in Figure 19. Figure 20 describes a situation where subtraction is carried out on 4-bit numbers: Figure 20 is a modification of Figure 19, performing both addition and subtraction.
Figure 19: 4-bit Adder
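The full adder, and a 4-bit ripple-carry adder built by chaining full adders as in Figure 19, can be sketched in Python as follows (an illustration; the function names and bit ordering are our own choices):

def full_adder(a, b, carry_in):
    """Add two bits plus a carry in; return (sum, carry_out)."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_adder(a_bits, b_bits):
    """n-bit adder: chain full adders, working from the least significant bit."""
    carry, total = 0, []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        s, carry = full_adder(a, b, carry)
        total.append(s)
    return list(reversed(total)), carry

print(ripple_adder([0, 1, 1, 0], [0, 0, 1, 1]))  # 6 + 3 = ([1, 0, 0, 1], 0)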
4.4.2 Encoders
An encoder is a device that converts a single positional signal into a binary code. For 2^n inputs, there are n outputs; for example, 4 inputs will produce 2 outputs. Figure 21 describes an encoder with inputs numbered 0 to 2^n - 1 and outputs numbered 0 to n - 1; it is the symbolic diagram of a typical encoder. The logic diagram is seen in Figure 22, and the characteristic table is in Table 13, from which the Boolean expressions are derived. An encoder can also be described as a combinational circuit that performs the inverse operation of a decoder. An encoder has a maximum of 2^n (or fewer) input lines and n output lines; the output lines generate the binary code corresponding to the active input. Figure 22 shows the logic diagram of an 8 x 3 encoder with 8 input and 3 output lines.
From Table 13, the Boolean expression Y0 = X1 + X3 + X5 + X7 is derived: Y0 is 1 in exactly those rows of the table where one of these X inputs is 1. Students should derive the corresponding expressions for Y1 and Y2.
Table 13: Characteristic table of the 8 x 3 encoder
Input X7 X6 X5 X4 X3 X2 X1 X0 Y2 Y1 Y0
0 0 0 0 0 0 0 0 1 0 0 0
1 0 0 0 0 0 0 1 0 0 0 1
2 0 0 0 0 0 1 0 0 0 1 0
3 0 0 0 0 1 0 0 0 0 1 1
4 0 0 0 1 0 0 0 0 1 0 0
5 0 0 1 0 0 0 0 0 1 0 1
6 0 1 0 0 0 0 0 0 1 1 0
7 1 0 0 0 0 0 0 0 1 1 1
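The behaviour in Table 13 can be sketched as follows (illustrative only; the function assumes exactly one input line is high):

def encoder_8_to_3(inputs):
    """inputs is a list [X0 ... X7] with exactly one line set to 1.
    The output is the 3-bit binary code of the active line (Y2, Y1, Y0)."""
    line = inputs.index(1)                 # which single input is high
    return (line >> 2) & 1, (line >> 1) & 1, line & 1

print(encoder_8_to_3([0, 0, 0, 0, 0, 1, 0, 0]))  # X5 active -> (1, 0, 1)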
4.4.3 Decoder
A decoder is a device that converts a binary code into a single positional signal. A decoder can be described as a combinational circuit that converts binary information from its n coded inputs to a maximum of 2^n distinct outputs. Figure 23 describes the logic diagram of a 3 x 8 decoder, and its characteristic table is given in Table 14. For n input signals there are 2^n output signals. The most commonly used decoders are n-to-m decoders, where m <= 2^n; an n-to-m decoder has n inputs and m outputs and is also referred to as an n x m decoder. Figure 23 is a 3-to-8 line decoder with three input variables which are decoded into eight outputs, each output representing one of the minterms of the three input variables.
Based on Table 14, Y7 = X2.X1.X0: Y7 is 1 only when all three inputs are 1 (note that each output is the product of the input literals, not their sum). Students should attempt deriving the Boolean relations for the remaining outputs.
Table 14: Characteristic table of the 3 x 8 decoder
X2 X1 X0 Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0
0 0 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 1 0 0 0 0 0 0 1 0 0
0 1 1 0 0 0 0 1 0 0 0
1 0 0 0 0 0 1 0 0 0 0
1 0 1 0 0 1 0 0 0 0 0
1 1 0 0 1 0 0 0 0 0 0
1 1 1 1 0 0 0 0 0 0 0
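A behavioural sketch of the 3-to-8 decoder of Table 14 is given below (illustrative; the function name and output ordering are our own):

def decoder_3_to_8(x2, x1, x0):
    """Return [Y7 ... Y0]; exactly one output is 1, matching Table 14."""
    line = 4 * x2 + 2 * x1 + x0            # decimal value of the inputs
    outputs = [0] * 8
    outputs[line] = 1                      # that output line goes high
    return list(reversed(outputs))         # listed as Y7 down to Y0

print(decoder_3_to_8(1, 1, 1))  # [1, 0, 0, 0, 0, 0, 0, 0] -> only Y7 is 1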
4.4.4 Multiplexer
A multiplexer is a combinational circuit that selects binary information from one of its 2^n input data lines and directs it to a single output line. The selection of a particular input data line for the output is decided on the basis of the selection lines. The multiplexer is often called a data selector, since it selects only one of many data inputs. The multiplexer is one of the basic building units of a computer system which, in principle, allows the sharing of a common line by more than one input line: it connects multiple input lines to a single output line, and at any specific time one of the input lines is selected and the selected input is passed on to the output line. It can also be seen as a device that allows several inputs to produce a single output: one out of several inputs is selected and connected to the single output, much as a remote control selects one of several sources for a single screen. Figure 24 shows the symbolic (block) diagram and Figure 25 describes the logic diagram accordingly: N inputs to one (1) output Q. Table 24 gives the truth table of the multiplexer. To construct a multiplexer with 8 inputs, the dimension of the inbuilt decoder will be 3 x 8; a decoder is an integral part of the multiplexer. Given 2^n inputs, n is the number of bits on the address (selection) lines. A 2^n-to-1 multiplexer has 2^n input data lines and n input selection lines whose bit combinations determine which input data are selected for the output.
Figure 24: Symbolic (Block) diagram
N N-1 N1 N0 Q
0 0 0 1 1
0 0 1 0 1
0 1 0 0 1
1 0 0 0 1
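The selection behaviour of a 2^n-to-1 multiplexer can be sketched as follows (illustrative; a de-multiplexer would perform the reverse routing, from one input to the selected output):

def multiplexer(data_inputs, select_bits):
    """2^n-to-1 multiplexer: the select bits choose which data input
    is passed to the single output Q."""
    index = 0
    for bit in select_bits:                # build the address from the select lines
        index = index * 2 + bit
    return data_inputs[index]

# 4-to-1 example: two select lines choose one of four inputs.
print(multiplexer([1, 0, 1, 1], [1, 0]))   # select = 10 (2) -> third input = 1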
4.4.5 De-multiplexer
A de-multiplexer is a combinational circuit that performs the reverse operation of a multiplexer. A de-multiplexer has a single input, n selection lines and a maximum of 2^n outputs: it has a single input with several outputs, which is the opposite of a multiplexer, as the accompanying logic diagram shows. Students should attempt producing the truth table.
4.4.6 Magnitude Comparator
A magnitude comparator is a hardware electronic device that takes two numbers as inputs
in binary form and determines whether one number is greater than, less than or equal to
the other number. Figure 28 describes the logic diagram of a typical magnitude comparator.
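A behavioural sketch of a magnitude comparator is given below (illustrative; it compares two unsigned binary numbers and raises exactly one of its three outputs):

def magnitude_comparator(a, b):
    """Return three outputs (A>B, A=B, A<B), exactly one of which is 1."""
    return int(a > b), int(a == b), int(a < b)

print(magnitude_comparator(0b101, 0b011))   # (1, 0, 0): 5 is greater than 3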
4.5 Sequential Circuits
Sequential circuits are logic circuits whose present outputs depend on past inputs as well as present ones; these circuits store and remember information. Sequential circuits, unlike combinational circuits, are time dependent: normally the current output of a sequential circuit depends on the state of the circuit and on the current input to the circuit. A sequential circuit is a connection of flip-flops and gates. What is a flip-flop? You will find the answer in this section. Because their outputs depend on past inputs, sequential circuits can be used as storage elements. Examples are counters, flip-flops and registers. Sequential circuits are of two types:
· Synchronous
· Asynchronous
Synchronous circuits use flip-flops, and their status can change only at discrete instants (don't they seem a good choice for discrete digital devices such as computers?). Asynchronous sequential circuits may be regarded as combinational circuits with feedback paths; since the propagation delays from output to input are small, they may tend to become unstable at times.
4.5.1 Clock Signal of a Clock Pulse Generator
Synchronisation in sequential circuits is achieved using a clock pulse generator, which synchronises the effect of the inputs on the outputs. It produces a signal of the form shown in Figure 30. The signal produced by a clock pulse generator is a train of clock pulses, or a clock signal, and these clock pulses are distributed throughout the computer system for synchronisation.
A clock can have two states: an enable or active state, and a disable or inactive state. Both of these states can be related to the zero or one levels of the clock signal (it depends on the implementation). Normally, the flip-flops change their state only during the active state of the clock pulse. In certain designs the change of state is triggered by the transition (this is called edge triggering) of the clock signal from 0 to 1 or from 1 to 0. A typical CPU is synchronized by a clock signal whose frequency is used as a basic measure of the CPU's speed of operation, hence the familiar term "CPU clock speed".
4.5.2 Flip-Flops
What is a flip-flop? A flip-flop is a binary cell which can store one bit of information and which is itself a sequential circuit. But how does it do this? A flip-flop maintains one of two stable states, which can be treated as zero or one depending on the presence or absence of output signals. Flip-flops are bi-stable devices used for storing 1 bit of information; by bi-stable, we mean that the device can exist in one of two mutually exclusive states. If the state is 1, it stores 1; if the state is 0, it stores 0. The state of a clocked flip-flop can change only when a clock pulse arrives. Let us first see the basic flip-flop, or latch, that was a revolutionary step in computers. The basic latch presented here is asynchronous.
4.5.2.1 S-R Flip-Flops
Theoretically the SR and RS flip-flops are the same. When both the S and R inputs are high the output is undefined, yet a practical circuit must assign outputs to all input conditions of the flip-flop; hence, RS and SR flip-flops were designed. The logic diagram of an S-R latch, which is the basic flip-flop, is shown in Figure 31 (a and b), while the clocked flip-flop is shown in Figure 32. The block diagram is described in Figure 33 and the truth table in Table 25, based on the NOR gate. The S-R (basic) flip-flop can be designed using either NOR or NAND gates.
Flip-flops are an application of logic gates. A flip-flop circuit can remain in a binary state indefinitely (as long as power is delivered to the circuit) until directed by an input signal to switch states. S-R stands for SET-RESET. The SET-RESET (S-R) flip-flop consists of four NAND gates, whereas the RESET-SET (R-S) flip-flop consists of two NOR gates and two AND gates, as described in Figure 33. When the NOR gates are replaced with NAND gates, the positions of S and R and of the outputs also change, though the circuits carry out the same function. The design of these flip-flops includes two inputs, called the SET [S] and RESET [R] inputs.
Figure 31b: S-R Latch
A flip-flop can be constructed using two AND and NOR gates; it contains a feedback loop. The
flip-flop in the figure has two inputs R (Reset) and S (set) and two outputs Q and ¬Q. In a
normal mode of operation both the flip-flop inputs are at zero i.e. S = 0 & R = 0. This flip-flop
can show two states: either the value Q is 1 (therefore ¬Q = 0) we say the flip-flop is in set state
or the value of Q is 0 (therefore ¬Q = 1) we call it a clear state. Let us see how the S and R input
can be used to set and clear the state of the flip-flop. The first question is, why in normal cases S
and R are zero? The reason is that this state does not cause any change in state. Suppose the flip-
flop was in set state i.e. Q = 1 and ¬Q = 0 and as S = 0 and R = 0, the output of ‘a’ NOR gate
will be 1 since both its inputs ¬Q and R are zero (refer to the truth table of the NOR gate in the Figure)
and ‘b’ NOR gate will show output as 0 as one of its input Q is 1. Similarly, if flip-flop was in
clear state then ¬Q = 1 and R = 0, therefore, output of ‘a’ gate will be 0 and ‘b’ gate 1. Thus,
flip-flop maintains a stable state at S = 0 and R = 0. The flip-flop is taken to set state if the S
input momentarily goes to 1 and then goes back to 0. R remains at zero during this time. What
happens if, say initially, the flip-flop was in state 0 i.e. the value of Q was 0. As soon as S
becomes 1 the output of NOR gate ‘b’ goes to 0 i.e. ¬Q becomes 0 and almost immediately Q
becomes 1 as both the inputs (¬Q and R) to NOR gate 'a' become 0. The change in the value of S back to 0 does not change the value of Q again, as the inputs to NOR gate 'b' now are Q = 1 and S
= 0. Thus, the flip-flop stays in the set state even after S returns to zero. If the flip-flop was in
state 1 then, when S goes to 1 there is no change in value of ¬Q as both the inputs to NOR gate
‘b’ are 1 at this time. Thus, ¬Q remains in state 0 or in other words flip-flop stays in the set state.
If R input goes to value 1 then flip-flop acquires the clear state. On changing momentarily, the
value of R to 1 the Q output changes to 0 irrespective of the state of flip-flop and as Q is 0 and S
is 0 the ¬Q becomes 1. Even after R comes back to value 0, Q remains 0 i.e., flip-flops come to
the clear state. What will happen when both S and R go to 1 at the same time? Well, this is the
situation which may leave the flip-flop in either the set or the clear state, depending on which of S and R remains at 1 longer (i.e. returns to zero last). But meanwhile both of them are 1, and both Q and ¬Q become 0, which implies that Q and its complement are equal, an impossible situation. Therefore, the transition in which both S and R go to 1 simultaneously is not allowed.
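The behaviour just described can be imitated with the small Python sketch below, which lets the two cross-coupled NOR gates settle over a few iterations (a behavioural illustration only, not a timing-accurate model):

def nor(a, b):
    return 1 - (a | b)

def sr_latch(s, r, q, q_bar):
    """One settling pass of the cross-coupled NOR latch: feed the outputs
    back until they stop changing (S = R = 1 is the forbidden input)."""
    for _ in range(4):                      # a few iterations are enough to settle
        q, q_bar = nor(q_bar, r), nor(q, s) # gate 'a' drives Q, gate 'b' drives not-Q
    return q, q_bar

q, q_bar = 0, 1                             # start in the clear state
q, q_bar = sr_latch(1, 0, q, q_bar); print(q, q_bar)  # set   -> 1 0
q, q_bar = sr_latch(0, 0, q, q_bar); print(q, q_bar)  # hold  -> 1 0
q, q_bar = sr_latch(0, 1, q, q_bar); print(q, q_bar)  # reset -> 0 1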
Let us try to construct a synchronous S-R flip-flop from the basic latch, using a clock pulse to synchronise the flip-flop. The main feature of the S-R flip-flop is the addition of a clock pulse input: in this flip-flop a change in the value of S or R will change the state of the flip-flop only if the clock pulse at that moment is one. The logic diagram of the S-R flip-flop is depicted in Figure 32, the block diagram is shown in Figure 33, and Table 25 gives the truth table.
The operation of a basic flip-flop can be modified by providing an additional control input that determines when the state of the circuit is to be changed. The limitation of an S-R flip-flop built from NOR or NAND gates is this invalid (undefined) state. The problem can be overcome by using a modified flip-flop that assigns a defined behaviour to the otherwise invalid input combination, regardless of the state of the circuit.
Table 25: Truth table of the S-R flip-flop
S R State at completion
0 0 0
1 0 1
0 1 0
1 1 Undefined
The value ¬Q is available as an additional output in complemented form. The excitation or characteristic table basically represents the effect of the S and R inputs on the state of the flip-flop, irrespective of its current state. Two further inputs, P (preset) and C (clear), are asynchronous inputs that can be used to set or clear the flip-flop respectively at the start of operation, independent of the clock pulse. Let us now have a look at some other types of flip-flop.
4.5.2.2 J-K Flip-Flop
The J-K flip-flop can be considered as a modification of the S-R flip-flop; the main difference is that the previously undefined (intermediate) state is given a precise meaning. The characteristics of the inputs J and K are the same as the S and R inputs of the S-R flip-flop: J stands for SET and K stands for CLEAR. When both the inputs J and K are HIGH, the flip-flop switches to its complement state; so for a value of Q = 1 it switches to Q = 0, and for a value of Q = 0 it switches to Q = 1.
The basic drawback of the R-S flip-flop is that one combination of input conditions is not utilized; with a little change in the circuit this combination can be put to use. In the J-K flip-flop this last combination is used to complement the state of the flip-flop. After discussing these simple sequential circuits (flip-flops), we shall discuss some of the more complex sequential circuits that can be developed using simple gates and flip-flops. This type of flip-flop permits a high signal on both inputs, that is J = K = 1, unlike the S-R flip-flop, for which S and R cannot both be 1; the J-K flip-flop is therefore more universal. Both S-R and J-K flip-flops are used in different situations. Figure 34 describes the logic diagram of a J-K flip-flop and Figure 35 describes the block diagram; the truth table is given below.
J K State at completion
0 0 0
1 0 1
0 1 0
1 1 Complement of flip-flop state
4.5.2.3 D Flip-Flop
The D flip-flop can be viewed as a modification of the S-R flip-flop in the sense that it presents the currently applied input as the state of the flip-flop. In effect it stores 1 bit of information and is therefore sometimes referred to as the Data flip-flop. Note that the state of the flip-flop always follows the applied input: it has no input condition under which the state is simply held, unlike the S-R flip-flop, whose state does not change when S = 0 and R = 0. If we do not want the state to change on a particular clock pulse, then either the clock must be disabled during that period or the output must be fed back to the D input. The D flip-flop is also referred to as the Delay flip-flop because it delays the 0 or 1 applied to its input by a single clock pulse. Figure 36 shows the logic diagram of the D flip-flop and Figure 37 the block diagram. In the logic diagram the D input is connected to the S input and the complement of D is connected to the R input. When the value of D is 1 (HIGH), the flip-flop moves to the SET state on the clock pulse; if it is 0 (LOW), the flip-flop switches to the CLEAR state. Table 26 is the truth table of the D flip-flop.
Figure 36: Logic diagram of D-FF
Table 26: Truth table of the D flip-flop
Q  D   Q(t+1)
0  0   0
0  1   1
1  0   0
1  1   1
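Because D drives S and its complement drives R, the stored bit simply follows D on each clock pulse. A minimal illustrative sketch (the function name is ours):

def d_next_state(q, d, clock=1):
    """D flip-flop: on a clock pulse the state follows the D input (sketch).
    Internally this is the S-R flip-flop wired with S = D and R = NOT D."""
    if clock == 0:
        return q          # no clock pulse: hold the present state
    return d              # S = D, R = NOT D always yields Q(t+1) = D

print(d_next_state(0, 1))   # -> 1
print(d_next_state(1, 0))   # -> 0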
4.5.2.4 T Flip-Flop
The T flip-flop is a single-input version of the J-K flip-flop, obtained by connecting the J and K inputs together. The flip-flop therefore has only one input, T, along with the clock pulse. It is called a toggle flip-flop because of its ability to complement (toggle) its state. Figure 38 depicts the logic diagram, Figure 39 the block diagram, and Table 27 is the truth table of the T flip-flop. The state of the flip-flop is changed by a momentary change in the input signal; this momentary change is known as a trigger, and the transition it causes is said to trigger the flip-flop.
Figure 39: Symbolic diagram of T flip-flop
Table 27: Truth table of the T flip-flop
Q  T   Q(t+1)
0  0   0
0  1   1
1  0   1
1  1   0
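A T flip-flop with T held at 1 toggles on every clock pulse, which is the basis of the counters discussed later in this chapter. The short sketch below is an illustration only; the function name is not from the text.

def t_next_state(q, t, clock=1):
    """T (toggle) flip-flop: J and K tied together as a single input T (sketch)."""
    if clock == 0 or t == 0:
        return q          # hold the present state
    return 1 - q          # T = 1: complement (toggle) the state

q = 0
trace = []
for pulse in range(6):    # apply six clock pulses with T = 1
    q = t_next_state(q, 1)
    trace.append(q)
print(trace)              # -> [1, 0, 1, 0, 1, 0]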
A pulse starts from an initial value of 0, goes momentarily to 1, and after a short while returns to its initial 0 value. A clock pulse is either positive or negative. A positive clock source remains at 0 during the interval between pulses and goes to 1 during the occurrence of a pulse. The pulse goes through two signal transitions: from 0 to 1 and back from 1 to 0. Figure 39b shows the transitions through which the pulse signal goes. The transition from 0 to 1 is defined as the positive edge, and the transition from 1 to 0 as the negative edge.
Figure 39b: Signal Pulse Transition
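Edge-triggered flip-flops respond at only one of these two transitions. The following illustrative sketch detects positive and negative edges by comparing successive samples of a clock signal; the sample waveform is invented for the example.

clock_samples = [0, 0, 1, 1, 0, 1, 0]   # an example clock waveform

for previous, current in zip(clock_samples, clock_samples[1:]):
    if previous == 0 and current == 1:
        print("positive edge")          # 0 -> 1 transition
    elif previous == 1 and current == 0:
        print("negative edge")          # 1 -> 0 transition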
Assignment: Design the clock and next-state diagrams of S-R, J-K, D and T flip-flops.
4.7 Registers
A register is a fast memory used to accept, store, and transfer data and instructions that are being used immediately by the CPU. A register can also be considered as a group of flip-flops, with each flip-flop capable of storing one bit of information; a register with n flip-flops can therefore store n bits of binary information. The flip-flops hold the binary information, whereas the gates control the flow of information, i.e. when and how the information is transferred into the register. Different types of registers are available commercially. The simplest register consists only of flip-flops, with no external gates.
The transfer of new data into a register is referred to as loading the register. A register is a binary functional unit which holds binary information in digital form; it consists of a group of binary storage cells. A register contains one or more flip-flops, depending on the number of bits to be stored in a word, with a separate flip-flop storing each bit of the word. In addition to storage, registers are normally coupled with combinational gates that enable certain data-processing tasks. Thus, in a broad sense, a register consists of the flip-flops that store binary information and the gates that control when and how information is transferred into the register. Normally, independent data lines are provided for each flip-flop, enabling data to be transferred to and from all flip-flops of the register simultaneously; this mode of operation is called parallel input-output. Since the stored information in a set of flip-flops is treated as a single entity, common control signals such as clock, preset and clear can be used for all the flip-flops of the register. Registers can be constructed from any type of flip-flop; the flip-flops in integrated-circuit registers are usually constructed internally from two separate flip-flop circuits. We therefore see registers as groups of flip-flops working together as a coherent unit to input, store, transform and output data, and to carry out bit-by-bit operations. The purpose of a register is storage. In this course, three types of register will be discussed: storage (simple), transformation and shift registers.
4.7.1 Storage Register
These are used for the storage of bits of information. A 4-bit storage register is described in Figure 40. The figure shows a register constructed with four D-type flip-flops and a common clock-pulse input. The clock-pulse input, CP, enables all the flip-flops so that the information presently available at the four inputs can be transferred into the 4-bit register.
Figure 40: 4-bit storage register
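As a software analogy for Figure 40, the sketch below models a 4-bit storage register as four D flip-flops sharing a common clock enable. The class name and interface are illustrative, not part of the text.

class StorageRegister:
    """A simple n-bit storage register built from D flip-flops (sketch)."""

    def __init__(self, width=4):
        self.bits = [0] * width          # one D flip-flop per bit

    def load(self, data, clock=1):
        """On a clock pulse, each D flip-flop takes on the value at its input."""
        if clock == 1:
            self.bits = list(data)
        return self.bits

reg = StorageRegister()
print(reg.load([1, 0, 1, 1]))            # -> [1, 0, 1, 1]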
4.7.2 Transformation Register
This type of register is used for transforming data from one form to another, e.g. transforming a binary code into its one's complement. It can also be used for storage and parallel input. Figure 41 describes a transformation register.
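The one's-complement example mentioned above can be sketched as a transform step applied to the stored bits; the function name here is illustrative only.

def ones_complement(bits):
    """Transformation register operation: complement every stored bit (sketch)."""
    return [1 - b for b in bits]

print(ones_complement([1, 0, 1, 1]))   # -> [0, 1, 0, 0]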
4.7.3 Shift Register
Shift registers are capable of shifting their binary information in one or both directions. The logical configuration of a shift register consists of a chain of flip-flops, with the output of one flip-flop connected to the input of the next. Shift registers are used for carrying out shift operations, which are involved in multiplication, division and partial addition. Figure 42 describes a shift register.
The most general shift registers are often referred to as bidirectional shift registers with parallel load. A common clock is connected to every stage to synchronise all operations. A serial input line is associated with the left-most stage, and a serial output line with the right-most stage. A control state is also provided that leaves the information in the register unchanged even though clock pulses are applied continuously.
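A serial-in shift register can be sketched as follows: on each clock pulse the bits move one position to the right and the serial input enters at the left. The class and method names are illustrative.

class ShiftRegister:
    """Serial-in, parallel-out shift register (sketch)."""

    def __init__(self, width=4):
        self.bits = [0] * width

    def shift_right(self, serial_in, clock=1):
        """On a clock pulse, shift right; serial_in enters the left-most stage."""
        if clock == 1:
            self.bits = [serial_in] + self.bits[:-1]
        return self.bits

sr = ShiftRegister()
for bit in [1, 0, 1, 1]:        # shift a serial bit stream in
    sr.shift_right(bit)
print(sr.bits)                  # -> [1, 1, 0, 1]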
4.8 Counters
These are used for counting signals. They are widely used in digital electronic like watches,
computers etc. types of counters include ring, backward, forward and so on. For an n-bits
61
counter, the counting bits ranges from 0 to 2n-1. Counters are made up of flip-flops. Each flip-
flop holds one bit. If we have say 2-bits counter, the range will be 0 to 3. Table 5 describe an
Table 5: A 2-bit counter
Flip-flop outputs (Q1 Q0)   Count
0 0                         0
0 1                         1
1 0                         2
1 1                         3
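Connecting toggle flip-flops in a chain, with each stage toggling when the stage below it falls from 1 to 0, yields a ripple (binary) counter. The sketch below reproduces the 2-bit count of Table 5; the wiring model is a simplified illustration, not a circuit description from the text.

def ripple_count(pulses, width=2):
    """Binary counter built from toggle flip-flops (sketch).
    Each stage toggles when the stage below it falls from 1 to 0."""
    q = [0] * width                  # q[0] is the least significant bit
    for _ in range(pulses):
        i = 0
        while i < width:
            old = q[i]
            q[i] = 1 - q[i]          # this T flip-flop toggles
            if not (old == 1 and q[i] == 0):
                break                # no 1 -> 0 transition, so no carry ripple
            i += 1
    return q

print(ripple_count(3))               # -> [1, 1], i.e. binary 11 = 3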
CHAPTER 5
Computer Architecture is concerned with the way hardware components are connected together
to form a computer system. It acts as the interface between hardware and software. Computer
system architecture is considered first. Computer Architecture deals with high-level design
issues. Architecture involves Logic (Instruction sets, Addressing modes, Data types, Cache
optimization).
A program is a sequence of instructions stored in computer memory. These instructions are executed to process data which have already been loaded into the computer memory through some input device. After processing the data, the result is either stored in memory for further reference or sent to the outside world through some output port. To execute an instruction, the processor contains, in addition to the arithmetic logic unit and the control unit, a number of registers used for temporary storage of data, as well as some special-function registers.
The special-function registers include the program counter (PC), the instruction register (IR), the memory address register (MAR) and the memory data register (MDR). The program counter is one of the most critical registers in the CPU: it monitors the execution of instructions, keeping track of which instruction is being executed and what the next instruction will be. The instruction register (IR) holds the instruction that is currently
being executed. The contents of the IR are available to the control unit, which generates the timing signals that control the various processing elements involved in executing the instruction.
The two registers MAR and MDR are used to handle data transfer between the main memory and the processor. The MAR holds the address of the main memory location to or from which data is to be transferred. The MDR contains the data to be written into, or read from, the addressed word of main memory. Whenever the processor is asked to communicate with devices, we say that the processor is servicing the devices. The processor can service these devices in one of two ways: one way is to use a polling routine, and the other is to use an interrupt.
Polling enables the processor software to check each of the input and output devices frequently; during this check, the processor tests whether any device needs servicing. The interrupt method, in contrast, provides an external asynchronous input that informs the processor that it should complete the instruction currently being executed and then fetch a new routine that will service the requesting device.
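As a software illustration of the polling approach (not taken from the text), the loop below repeatedly checks a set of hypothetical devices and services any that report a pending request.

# Each entry is a hypothetical device name with a "needs service" flag.
devices = {"keyboard": False, "printer": True, "disk": False}

def poll(devices):
    """Check every device in turn and service the ones that need it."""
    for name, needs_service in devices.items():
        if needs_service:
            print(f"servicing {name}")
            devices[name] = False        # the request has been handled

poll(devices)                            # -> servicing printer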
In computer architecture, the general system architecture is discussed under two major headings: the Stored Program Control concept and Flynn's classification of computers. Figure 45 describes stored program control, while Figure 46 describes Flynn's classification of computers. The term Stored Program Control concept refers to the storage of instructions in the computer's memory. The idea was introduced in the late 1940s by John von Neumann, who proposed that a program be stored electronically in memory along with the data it operates on. ENIAC (Electronic Numerical Integrator and Computer) was one of the first computing systems, designed in the early 1940s; the stored-program concept, in which the machine uses its memory to hold both instructions and data, was adopted in its successors.
M. J. Flynn proposed a classification of computer organizations based on the number of instruction and data items that are manipulated simultaneously. The sequence of instructions read from memory constitutes an instruction stream, while the operations performed on the data in the processor constitute a data stream.
Registers are a type of computer memory used to quickly accept, store, and transfer data and instructions that are being used immediately by the CPU; the registers used by the CPU are often termed processor registers. A processor register may hold an instruction, a storage address, or data (such as a bit sequence or individual characters). The computer needs processor registers for manipulating data and a register for holding a memory address; the register holding the memory address is used to locate the next instruction once execution of the current instruction is completed. Based on the diagram in Figure 47, the specific registers include the program counter (PC), address register (AR), instruction register (IR), temporary register (TR), input register (INPR), output register (OUTR), data register (DR) and accumulator (AC). The memory has a capacity of 4096 words with 16 bits per word. The number of registers increases depending on the architecture of the computer and the capabilities of the processor.
The PC holds the address of the next instruction to be fetched after the current instruction is executed; it is a 12-bit register. The AR holds the address of the operand and is also a 12-bit register: acting as the memory address register (MAR), its 12 bits hold the address of the memory location to be accessed. The IR is a 16-bit register that holds the instruction currently being executed; the instruction read from memory is placed there. The input register (INPR) holds the input character supplied by the user, and the output register (OUTR) holds the output after the input data has been processed; INPR and OUTR are 8-bit registers. The TR is a 16-bit register that holds temporary data during processing. The DR is a 16-bit register that holds the operand, i.e. the data read from memory for processing.
Figure 47: Basic computer registers
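The register set just described can be summarised in a small table. The sketch below restates, for each register of this basic computer, its width in bits and its role; the dictionary layout is illustrative, and the 16-bit accumulator width is inferred from the 16-bit word size rather than stated explicitly in the text.

# Register name -> (width in bits, purpose), as described for the basic computer.
registers = {
    "PC":   (12, "address of the next instruction"),
    "AR":   (12, "address of the operand in memory"),
    "IR":   (16, "instruction currently being executed"),
    "TR":   (16, "temporary data"),
    "DR":   (16, "memory operand"),
    "AC":   (16, "accumulator for processing"),      # assumed width
    "INPR": (8,  "input character"),
    "OUTR": (8,  "output character"),
}

for name, (width, purpose) in registers.items():
    print(f"{name:5s} {width:2d} bits  {purpose}")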
Computer instructions are the machine-language instructions that a particular processor understands and executes. A computer performs its tasks on the basis of the instructions provided. An instruction is made up of groups of bits called fields:
The operation code (opcode) field, which specifies the operation to be performed.
The address field, which contains the location of the operand, i.e. a register or memory location.
The mode field, which specifies how the operand is to be located.
A Register-reference instruction is recognised by the operation code 111 together with a 0 in the leftmost bit of the instruction; it specifies an operation on, or a test of, the AC register.
5.4 Input-Output instruction
Just like the Register-reference instruction, an Input-Output instruction does not need a reference
to memory and is recognized by the operation code 111 with a 1 in the leftmost bit of the
instruction. The remaining 12 bits are used to specify the type of the input-output operation or
test performed.
If the three operation code bits in positions 12 through 14 are not equal to 111, the instruction is a memory-reference type, and the bit in position 15 is taken as the addressing mode I.
When the three operation code bits are equal to 111, the control unit inspects the bit in position 15: if that bit is 0, the instruction is a register-reference type; otherwise, it is an input-output instruction.
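These decoding rules can be captured in a few lines of Python. The sketch below classifies a 16-bit instruction word according to bits 12 through 14 (the opcode) and bit 15 (I); the function name and return strings are illustrative.

def classify_instruction(word):
    """Classify a 16-bit basic-computer instruction word (sketch)."""
    i_bit   = (word >> 15) & 0x1      # bit 15
    opcode  = (word >> 12) & 0x7      # bits 12-14
    address = word & 0x0FFF           # bits 0-11

    if opcode != 0b111:
        mode = "indirect" if i_bit else "direct"
        return f"memory-reference ({mode}), opcode {opcode:03b}, address {address:03X}"
    if i_bit == 0:
        return "register-reference"
    return "input-output"

print(classify_instruction(0x1ABC))   # opcode 001, I = 0 -> memory-reference (direct)
print(classify_instruction(0x7080))   # opcode 111, I = 0 -> register-reference
print(classify_instruction(0xF040))   # opcode 111, I = 1 -> input-output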
An instruction set is said to be complete if the computer includes a sufficient number of instructions in each of the following categories:
Arithmetic, logic and shift instructions.
Instructions for moving information to and from memory and processor registers.
Instructions which control the program, together with instructions that check status conditions.
Input and output instructions.
Arithmetic, logic and shift instructions provide computational capabilities for processing the type
of data the user may wish to employ. A huge amount of binary information is stored in the
memory unit, but all computations are done in processor registers. Therefore, one must possess
the capability of moving information between these two units. Program control instructions such
as branch instructions are used to change the sequence in which the program is executed. Input and
Output instructions act as an interface between the computer and the user. Programs and data
must be transferred into memory, and the results of computations must be transferred back to the
user.
The control unit is classified into two major categories: hardwired control and microprogrammed control. In the hardwired control organization, the control logic is implemented with gates, flip-flops, decoders and other digital circuits, as illustrated in Figure 48.
A hardwired control unit consists of two decoders, a sequence counter and a number of logic gates, as shown in Figure 48. An instruction fetched from the memory unit is placed in the instruction register (IR). The instruction register is divided into the I bit, the operation code, and the address bits 0 through 11. The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder, whose outputs are designated by the symbols D0 through D7. Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I, and bits 0 through 11 are applied to the control logic gates. The sequence counter (SC) counts in binary from 0 through 15, and its outputs are decoded into timing signals T0 through T15.
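The decoders and sequence counter can be mimicked in software to show how the control signals D0-D7 and T0-T15 of Figure 48 are produced; the helper names below are illustrative.

def decode(value, width):
    """n-to-2^n decoder: exactly one output line is 1 (sketch)."""
    lines = [0] * (2 ** width)
    lines[value] = 1
    return lines

opcode = 0b101                       # bits 12-14 of the instruction
D = decode(opcode, 3)                # D0 ... D7: here D5 = 1
sc = 3                               # sequence counter value
T = decode(sc, 4)                    # T0 ... T15: here T3 = 1

print(D.index(1), T.index(1))        # -> 5 3
sc = (sc + 1) % 16                   # SC is incremented (or cleared) each cycle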
In the microprogrammed control organization, the control memory address register specifies the address of the microinstruction. The control memory is assumed to be a ROM, within which all control information is permanently stored. The control register holds the microinstruction fetched from memory; the microinstruction contains a control word that specifies one or more micro-operations for the data processor. While these micro-operations are being executed, the next address is computed in the next-address generator circuit and then transferred into the control address register to read the next microinstruction.
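The fetch-and-sequence behaviour just described can be sketched as a small loop: read a control word from a ROM-like table, issue its micro-operations, then load the next address. The control-memory contents shown here are invented purely for illustration.

# Hypothetical control memory: address -> (micro-operations, next address).
control_memory = {
    0: ("AR <- PC",     1),
    1: ("IR <- M[AR]",  2),
    2: ("PC <- PC + 1", 0),          # wrap around to fetch the next instruction
}

car = 0                              # control address register
for _ in range(6):                   # run a few microinstruction cycles
    control_word, next_address = control_memory[car]   # control register
    print(f"CAR={car}: {control_word}")                 # issue the micro-operations
    car = next_address               # next-address generator loads CAR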
Instructions are executed by the processor by going through an instruction cycle for each instruction. In the basic computer, each instruction cycle consists of the following phases: fetch an instruction from memory, decode the instruction, read the effective address from memory if the instruction has an indirect address, and execute the instruction.
In computer architecture, input-output devices act as an interface between the machine and the user. Instructions and data stored in memory must come from some input device, and results are displayed to the user through some output device. The block diagram of the input-output configuration is shown in Figure 50; each quantity of information transferred is an eight-bit alphanumeric code. The information generated through the keyboard is shifted into the input register INPR, and the information for the printer is stored in the output register OUTR. The registers INPR and OUTR communicate with a communication interface serially and with the AC in parallel. The transmitter interface receives information from the keyboard and transmits it to INPR, while the receiver interface receives information from OUTR and sends it to the printer.
The hardware for this configuration includes: a memory unit with 4096 words of 16 bits each; registers AC (accumulator), DR (data register), SC (sequence counter), INPR (input register) and OUTR (output register); and seven flip-flops.
The control logic gates for the basic computer are the same as those used in the hardwired control organization, and the block diagram is likewise similar. The inputs to the control logic circuit come from the two decoders, the I flip-flop and bits 0 through 11 of the IR. The other inputs to the control logic are AC (bits 0 through 15), DR (bits 0 through 15), and the values of the seven flip-flops.
The outputs of the control logic circuit are: signals to control the inputs of the nine registers; signals to control the read and write inputs of memory; signals to set, clear, or complement the flip-flops; and signals S2, S1 and S0 to select a register for the common bus.
5.11 Computer Organization
Computer Organization is concerned with the structure and behaviour of a computer system as seen by the user. It deals with how the components of a system are connected, and tells us how exactly all the units in the system are arranged and interconnected. The organization expresses the realization of the architecture: Computer Organization deals with low-level design issues and involves the physical components (circuit design, adders, signals, peripherals).
CHAPTER 6
Several tools can be used to simulate logic circuits, including Simulink, Falstad Circuit Sim and LTspice. In this course, Falstad Circuit Sim will be used because of its easy access: it requires no installation and can be used directly online. It is an electronic circuit simulator.
When the applet starts up you will see an animated schematic of a simple LRC circuit. The green
color indicates positive voltage. The gray color indicates ground. A red color indicates negative
voltage. The moving yellow dots indicate current. To turn a switch on or off, just click on it. If
you move the mouse over any component of the circuit, you will see a short description of that
component and its current state in the lower right corner of the window. To modify a
component, move the mouse over it and click the right mouse button (or control-click if you have a Mac).
The "Circuits" menu contains a lot of sample circuits for you to try. The procedure for designing and simulating a circuit is as follows. The figure below shows the initial layout of the design and simulation environment: by default an existing design is loaded, which can be deleted so that the circuit of interest can be designed in its place. Figure 50 shows this default simulation layout. As explained earlier, the default layout shows that the green color indicates positive voltage, the gray color indicates ground, and a red color indicates negative voltage.
Figure 50: Default simulation Layout
Secondly, the default design is deleted and the design of interest is drawn in its place. Figure 51 describes a half-adder circuit designed in place of the deleted default design. In the given design there are two inputs, both set to one (1), applied to each gate; the AND gate produces the carry while the XOR gate produces the sum. Figure 53 presents a simulated 2x4 decoder whose inputs are 1 and 0; it is built from AND and NOT gates. Figure 55 describes a simulated J-K flip-flop with two inputs and two outputs. This diagram is built from NAND gates, like the S-R latch but with the S and R inputs replaced by J and K; the outputs are Q and NotQ, as indicated in Figure 54. Note, however, that Figure 54 shows a latch while Figure 55 shows a flip-flop: Figure 55 includes the clock signal during simulation. The negative and positive edges are all shown in green colour.
Figure 55: J-K flip-flop
Figure 56 describes a simulated S-R flip-flop with a clock signal. It is observed that the inputs are 0 and 1 while the outputs are 1 and 0; the clock waveform is shown in green beneath the circuit.
Figure 57 describes a simulated shift register whose inputs are serial and whose outputs are parallel. In this case, a clock signal is introduced to activate the input signal. On the other hand, Figure 58
William Stallings, Computer Organization and Architecture, Prentice Hall, ISBN 0131856448.
M. Morris Mano, Digital Design, 3rd Edition, PHI, 2006.
Taub and Schilling, Digital Integrated Electronics, McGraw-Hill.
R. P. Jain, Digital Electronics, 4th Edition, TMH.
https://www.falstad.com/circuit/