Digital

The document explains character encoding methods used in computers, specifically EBCDIC and ASCII, with a focus on Binary Coded Decimal (BCD) for representing decimal numbers in binary form. It details how BCD converts each decimal digit into a 4-bit binary equivalent, highlighting its efficiency and application in digital displays and financial calculations. Additionally, the document provides an overview of ASCII, its character classifications, and a truth table for BCD conversions.

The character code built into the computer determines how each letter, digit or special

character ($, %, #, etc.) is represented in binary code. Fortunately, there are only two
methods in wide use: EBCDIC and ASCII. IBM's mainframes and midrange systems
use EBCDIC. ASCII is used for everything else, including PCs and Macs.

BCD or Binary Coded Decimal


Binary Coded Decimal, or BCD, is another process for converting decimal numbers
into their binary equivalents.

 It is a form of binary encoding where each digit in a decimal number is represented


in the form of bits.
 This encoding can be done in either 4-bit or 8-bit (usually 4-bit is preferred).
 It is a fast and efficient scheme for converting decimal numbers into binary form,
compared with converting them to the ordinary (pure) binary system.
 BCD is generally used in digital displays, where direct manipulation of the data in pure binary
would be quite a task.
 Thus BCD plays an important role here because the manipulation is done by treating
each digit as a separate single sub-circuit.

The BCD equivalent of a decimal number is written by replacing each decimal digit
in the integer and fractional parts with its four-bit binary equivalent. The BCD code is
more precisely known as the 8421 BCD code, with 8, 4, 2 and 1 representing the weights
of the different bits in the four-bit group, starting from the MSB and proceeding towards the
LSB. This feature makes it a weighted code, which means that each bit in the four-bit
group representing a given decimal digit has an assigned weight.
Many decimal values have an infinite place-value representation in binary but a
finite place-value representation in binary-coded decimal. For example, 0.2 in binary is 0.001100…
(repeating), but in BCD it is exactly 0.0010. BCD thus avoids such fractional errors and is also used in large financial
calculations.
Consider the following truth table and focus on how these are represented.
Truth Table for Binary Coded Decimal
DECIMAL NUMBER BCD
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
In the BCD numbering system, the given decimal number is segregated into chunks
of four bits for each decimal digit within the number. Each decimal digit is converted
into its direct binary form (usually represented in 4-bits).
For example:

1. Convert (123)10 to BCD


From the truth table above,
1 -> 0001
2 -> 0010
3 -> 0011
thus, BCD becomes -> 0001 0010 0011

2. Convert (324)10 to BCD


(324)10 -> 0011 0010 0100 (BCD)
Again from the truth table above,
3 -> 0011
2 -> 0010
4 -> 0100
thus, BCD becomes -> 0011 0010 0100

This is how decimal numbers are converted to their equivalent BCDs.

 It is noticeable that BCD is nothing more than a binary representation of each
digit of a decimal number.
 It cannot be ignored that the BCD representation of a decimal number uses
extra bits compared with pure binary, which makes it less compact.
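As a quick illustration of the digit-by-digit rule described above, the following Python sketch (the function names are only illustrative) converts a non-negative decimal integer to its BCD form and back, reproducing the two examples worked out earlier.

def decimal_to_bcd(number):
    # Encode each decimal digit of `number` as its 4-bit binary group.
    return " ".join(format(int(digit), "04b") for digit in str(number))

def bcd_to_decimal(bcd):
    # Decode a space-separated string of 4-bit groups back to a decimal integer.
    return int("".join(str(int(group, 2)) for group in bcd.split()))

print(decimal_to_bcd(123))               # 0001 0010 0011
print(decimal_to_bcd(324))               # 0011 0010 0100
print(bcd_to_decimal("0011 0010 0100"))  # 324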

ASCII Code
ASCII stands for American Standard Code for Information Interchange. The
ASCII code is an alphanumeric code used for data communication in digital
computers. ASCII is a 7-bit code capable of representing 2^7 = 128
different characters. Each ASCII code is made up of a three-bit group
followed by a four-bit group.

o The ASCII code is a 7- or 8-bit alphanumeric code.
o The 7-bit code can represent 128 unique characters.
o The ASCII codes run from 00h to 7Fh. In this range, the codes from 00h to 1Fh are used for
control characters, and the codes from 20h to 7Fh are used for graphic symbols.
o The 8-bit code is extended ASCII, which supports 256 symbols, where additional mathematical and graphic
symbols are added.
o The range of the extended ASCII codes is 80h to FFh.
The ASCII characters are classified into the following groups:
Control Characters
The non-printable characters used for sending commands to the PC or printer are
known as control characters. We can set tab and line-break functionality with these
codes. The control characters are based on telex technology and are nowadays not
much used. The characters from 0 to 31, and character 127, come under control
characters.
Special Characters
All printable characters that are neither numbers nor letters come under the special
characters. These include technical, punctuation, and mathematical
characters, as well as the space. The characters from 32 to 47, 58 to 64, 91 to 96, and 123 to
126 come under this category.
Number Characters
This category of ASCII code contains the ten Arabic numerals from 0 to 9.
Letter Characters
This category contains two groups of letters, i.e., the group of uppercase letters
and the group of lowercase letters. The ranges from 65 to 90 and 97 to 122 come under
this category.
ASCII Table
The values are typically represented in ASCII code tables in decimal, binary, and
hexadecimal form.
Binary Hexadecimal Decimal ASCII Symbol Description Group
0000000 0 0 NUL The null character encourages the Control Character
device to do nothing
0000001 1 1 SOH The symbol SOH(Starts of heading) Control Character
Initiates the header.
0000010 2 2 STX The symbol STX(Start of Text) Control Character
ends the header and marks the
beginning of a message.
0000011 3 3 ETX The symbol ETX(End of Text) Control Character
indicates the end of the message.
0000100 4 4 EOT The EOT (End of Transmission) symbol Control Character
marks the end of a complete
transmission
0000101 5 5 ENQ The ENQ(Enquiry) symbol is a Control Character
request that requires a response
0000110 6 6 ACK The ACK(Acknowledge) symbol is Control Character
a positive answer to the request.
0000111 7 7 BEL The BEL(Bell) symbol triggers a Control Character
beep.
0001000 8 8 BS Lets the cursor move back one step Control Character
(Backspace)
0001001 9 9 TAB (HT) A horizontal tab that moves the Control Character
cursor within a row to the next
predefined position (Horizontal
Tab)
0001010 A 10 LF Causes the cursor to jump to the Control Character
next line (Line Feed)
0001011 B 11 VT The vertical tab lets the cursor jump Control Character
to a predefined line (Vertical Tab)
0001100 C 12 FF Requests a page break (Form Feed) Control Character
0001101 D 13 CR Moves the cursor back to the first Control Character
position of the line (Carriage
Return)
0001110 E 14 SO Switches to a special presentation Control Character
(Shift Out)
0001111 F 15 SI Switches the display back to the Control Character
normal state (Shift In)
0010000 10 16 DLE Changes the meaning of the Control Character
following characters (Data Link
Escape)
0010001 11 17 DC1 Control characters assigned Control Character
depending on the device used
(Device Control)
0010010 12 18 DC2 Control characters assigned Control Character
depending on the device used
(Device Control)
0010011 13 19 DC3 Control characters assigned Control Character
depending on the device used
(Device Control)
0010100 14 20 DC4 Control characters assigned Control Character
depending on the device used
(Device Control)
0010101 15 21 NAK The negative response to a request Control Character
(Negative Acknowledge)
0010110 16 22 SYN Synchronizes a data transfer, even Control Character
if no signals are transmitted
(Synchronous Idle)
0010111 17 23 ETB Marks the end of a transmission Control Character
block (End of Transmission Block)
0011000 18 24 CAN Makes it clear that transmission Control Character
was faulty and the data must be
discarded (Cancel)
0011001 19 25 EM Indicates the end of the storage Control Character
medium (End of Medium)
0011010 1A 26 SUB Replacement for a faulty sign Control Character
(Substitute)
0011011 1B 27 ESC Initiates an escape sequence and Control Character
thus gives the following characters
a special meaning (Escape)
0011100 1C 28 FS File separator. Control Character
0011101 1D 29 GS Group separator. Control Character
0011110 1E 30 RS Record separator. Control Character
0011111 1F 31 US Unit separator. Control Character
0100000 20 32 SP Blank space Special Character
0100001 21 33 ! Exclamation mark Special Character
0100010 22 34 " Quotation mark Special Character
0100011 23 35 # Pound sign Special Character
0100100 24 36 $ Dollar sign Special Character
0100101 25 37 % Percentage sign Special Character
0100110 26 38 & Commercial and Special Character
0100111 27 39 ' Apostrophe Special Character
0101000 28 40 ( Left bracket Special Character
0101001 29 41 ) Right bracket Special Character
0101010 2A 42 * Asterisk Special Character
0101011 2B 43 + Plus symbol Special Character
0101100 2C 44 , Comma Special Character
0101101 2D 45 - Dash Special Character
0101110 2E 46 . Full stop Special Character
0101111 2F 47 / Forward slash Special Character
0110000 30 48 0 Numbers
0110001 31 49 1 Numbers
0110010 32 50 2 Numbers
0110011 33 51 3 Numbers
0110100 34 52 4 Numbers
0110101 35 53 5 Numbers
0110110 36 54 6 Numbers
0110111 37 55 7 Numbers
0111000 38 56 8 Numbers
0111001 39 57 9 Numbers
0111010 3A 58 : Colon Special characters
0111011 3B 59 ; Semicolon Special characters
0111100 3C 60 < Less-than sign Special characters
0111101 3D 61 = Equals sign Special characters
0111110 3E 62 > Greater-than sign Special characters
0111111 3F 63 ? Question mark Special characters
1000000 40 64 @ At symbol Special characters
1000001 41 65 A Capital letters
1000010 42 66 B Capital letters
1000011 43 67 C Capital letters
1000100 44 68 D Capital letters
1000101 45 69 E Capital letters
1000110 46 70 F Capital letters
1000111 47 71 G Capital letters
1001000 48 72 H Capital letters
1001001 49 73 I Capital letters
1001010 4A 74 J Capital letters
1001011 4B 75 K Capital letters
1001100 4C 76 L Capital letters
1001101 4D 77 M Capital letters
1001110 4E 78 N Capital letters
1001111 4F 79 O Capital letters
1010000 50 80 P Capital letters
1010001 51 81 Q Capital letters
1010010 52 82 R Capital letters
1010011 53 83 S Capital letters
1010100 54 84 T Capital letters
1010101 55 85 U Capital letters
1010110 56 86 V Capital letters
1010111 57 87 W Capital letters
1011000 58 88 X Capital letters
1011001 59 89 Y Capital letters
1011010 5A 90 Z Capital letters
1011011 5B 91 [ Left square bracket Special character
1011100 5C 92 \ Inverse/backward slash Special character
1011101 5D 93 ] Right square bracket Special character
1011110 5E 94 ^ Circumflex Special character
1011111 5F 95 _ Underscore Special character
1100000 60 96 ` Gravis (backtick) Special character
1100001 61 97 a Lowercase letters
1100010 62 98 b Lowercase letters
1100011 63 99 c Lowercase letters
1100100 64 100 d Lowercase letters
1100101 65 101 e Lowercase letters
1100110 66 102 f Lowercase letters
1100111 67 103 g Lowercase letters
1101000 68 104 h Lowercase letters
1101001 69 105 i Lowercase letters
1101010 6A 106 j Lowercase letters
1101011 6B 107 k Lowercase letters
1101100 6C 108 l Lowercase letters
1101101 6D 109 m Lowercase letters
1101110 6E 110 n Lowercase letters
1101111 6F 111 o Lowercase letters
1110000 70 112 p Lowercase letters
1110001 71 113 q Lowercase letters
1110010 72 114 r Lowercase letters
1110011 73 115 s Lowercase letters
1110100 74 116 t Lowercase letters
1110101 75 117 u Lowercase letters
1110110 76 118 v Lowercase letters
1110111 77 119 w Lowercase letters
1111000 78 120 x Lowercase letters
1111001 79 121 y Lowercase letters
1111010 7A 122 z Lowercase letters
1111011 7B 123 { Left curly bracket Special characters
1111100 7C 124 | Vertical line Special characters
1111101 7D 125 } Right curly brackets Special characters
1111110 7E 126 ~ Tilde Special characters
1111111 7F 127 DEL The DEL (Delete) symbol deletes a Control characters
character. This is a control
character whose code consists of a
1 in every bit position.
Example 1: (10010101100001111011011000011010100111000011011111101001
110111011101001000000011000101100100110011)2
Step 1: In the first step, we make groups of 7 bits because the ASCII code is a 7-bit
code.
1001010 1100001 1110110 1100001 1010100 1110000 1101111 1101001 1101110
1110100 1000000 0110001 0110010 0110011
Step 2: Then, we find the equivalent decimal number of the binary digits either from
the ASCII table or 64 32 16 8 4 2 1 scheme.
Binary Decimal
1001010 64 + 8 + 2 = 74
1100001 64 + 32 + 1 = 97
1110110 64 + 32 + 16 + 4 + 2 = 118
1100001 64 + 32 + 1 = 97
1010100 64 + 16 + 4 = 84
1110000 64 + 32 + 16 = 112
1101111 64 + 32 + 8 + 4 + 2 + 1 = 111
1101001 64 + 32 + 8 + 1 = 105
1101110 64 + 32 + 8 + 4 + 2 = 110
1110100 64 + 32 + 16 + 4 = 116
1000000 64
0110001 32 + 16 + 1 = 49
0110010 32 + 16 + 2 = 50
0110011 32 + 16 + 2 + 1 = 51
Step 3: Last, we find the equivalent symbol of the decimal number from the ASCII
table.
Decimal Symbol
74 J
97 a
118 v
97 a
84 T
112 p
111 o
105 i
110 n
116 t
64 @
49 1
50 2
51 3
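The three steps above (grouping into 7-bit chunks, converting each group to decimal, and looking up the symbol) can be reproduced with a short Python sketch. The groups below are the ones listed in Step 1, and the output confirms that the decoded text reads JavaTpoint@123.

groups = ("1001010 1100001 1110110 1100001 1010100 1110000 1101111 "
          "1101001 1101110 1110100 1000000 0110001 0110010 0110011").split()

codes = [int(group, 2) for group in groups]   # Step 2: binary to decimal
text = "".join(chr(code) for code in codes)   # Step 3: decimal to ASCII symbol

print(codes)  # [74, 97, 118, 97, 84, 112, 111, 105, 110, 116, 64, 49, 50, 51]
print(text)   # JavaTpoint@123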
EBCDIC code
The EBCDIC (Extended Binary Coded Decimal Interchange Code), pronounced ‘eb-
si-dik’, is another widely used alphanumeric code, mainly popular with larger systems.
The code was created by IBM to extend the binary coded decimal that existed at that
time. All IBM mainframe computer peripherals and operating systems use EBCDIC
code, and their operating systems provide ASCII and Unicode modes to allow
translation between different encodings. It may be mentioned here that EBCDIC offers
no technical advantage over the ASCII code and its variant ISO-8859 or Unicode. Its
importance in the earlier days lay in the fact that it made it relatively easier to enter
data into larger machines with punch cards. Since punch cards are not used on
mainframes any more, the code is used in contemporary mainframe machines solely
for backwards compatibility.

EBCDIC code.
Decimal Hex Binary Code Code description
0 00 0000 0000 NUL Null character
1 01 0000 0001 SOH Start of header
2 02 0000 0010 STX Start of text
3 03 0000 0011 ETX End of text
4 04 0000 0100 PF Punch off
5 05 0000 0101 HT Horizontal tab
6 06 0000 0110 LC Lower case
7 07 0000 0111 DEL Delete
8 08 0000 1000
9 09 0000 1001
10 0A 0000 1010 SMM Start of manual message
11 0B 0000 1011 VT Vertical tab
12 0C 0000 1100 FF Form feed
13 0D 0000 1101 CR Carriage return
14 0E 0000 1110 SO Shift out
15 0F 0000 1111 SI Shift in
16 10 0001 0000 DLE Data link escape
17 11 0001 0001 DC1 Device control 1
18 12 0001 0010 DC2 Device control 2
19 13 0001 0011 TM Tape mark
20 14 0001 0100 RES Restore
21 15 0001 0101 NL New line
22 16 0001 0110 BS Backspace
23 17 0001 0111 IL Idle
24 18 0001 1000 CAN Cancel
25 19 0001 1001 EM End of medium
26 1A 0001 1010 CC Cursor control
27 1B 0001 1011 CU1 Customer use 1
28 1C 0001 1100 IFS Interchange file separator
29 1D 0001 1101 IGS Interchange group separator
30 1E 0001 1110 IRS Interchange record separator
31 1F 0001 1111 IUS Interchange unit separator
32 20 0010 0000 DS Digit select
33 21 0010 0001 SOS Start of significance
34 22 0010 0010 FS Field separator
35 23 0010 0011
36 24 0010 0100 BYP Bypass
37 25 0010 0101 LF Line feed
38 26 0010 0110 ETB End of transmission block
39 27 0010 0111 ESC Escape
40 28 0010 1000
41 29 0010 1001
42 2A 0010 1010 SM Set mode
43 2B 0010 1011 CU2 Customer use 2
44 2C 0010 1100
45 2D 0010 1101 ENQ Enquiry
46 2E 0010 1110 ACK Acknowledge
47 2F 0010 1111 BEL Bell
48 30 0011 0000

The first four-bit group, called the ‘zone’, represents the category of the character,
while the
second group, called the ‘digit’, identifies the specific character.
Signed Binary Number System
Signed binary is very similar to binary, except that it includes negative numbers as well.
The sign of the binary number is determined by the leading (furthest left) digit. If it is
a 1, then the number is negative, and (in the one's complement convention) its magnitude, or absolute value, can be found by flipping
all 1's to 0's and 0's to 1's. If the leading digit is a 0, then treat it like a normal binary
number. The table below shows how unsigned and signed binary numbers
differ in their decimal interpretation.

The main benefit of signed binary is the ability


to represent negative numbers. Using the same number of digits, signed and unsigned
binary cover the same number of values, i.e. the same amount of the number line, as seen in
the picture below. Unsigned binary is able to show larger numbers, though, because
its most significant digit contributes to the magnitude instead of acting as a sign.

In mathematics, positive numbers (including zero) are represented as unsigned


numbers. That is, we do not put a +ve sign in front of them to show that they are
positive numbers. But when dealing with negative numbers we do use a -ve sign in
front of the number to show that the number is negative in value and different from a
positive unsigned value, and the same is true with signed binary numbers.
However, in digital circuits there is no provision made to put a plus or even a minus
sign to a number, since digital systems operate with binary numbers that are
represented in terms of “0’s” and “1’s”. When used together in microelectronics, these
“1’s” and “0’s”, called a bit (being a contraction of BInary digiT), fall into several
range sizes of numbers which are referred to by common names, such as a byte or
a word.
We have also seen previously that an 8-bit binary number (a byte) can have a value
ranging from 0 (00000000)2 to 255 (11111111)2, that is 2^8 = 256 different
combinations of bits forming a single 8-bit byte. So for example an unsigned binary
number such as (01001101)2 = 64 + 8 + 4 + 1 = (77)10 in decimal. But digital systems
and computers must also be able to use and manipulate negative numbers as well as
positive numbers.
Mathematical numbers are generally made up of a sign and a value (magnitude) in
which the sign indicates whether the number is positive, ( + ) or negative, ( – ) with the
value indicating the size of the number, for example 23, +156 or -274. Presenting
numbers in this fashion is called "sign-magnitude" representation since the left most
digit can be used to indicate the sign and the remaining digits the magnitude or value
of the number.
Sign-magnitude notation is the simplest and one of the most common methods of
representing positive and negative numbers either side of zero, (0). Thus negative
numbers are obtained simply by changing the sign of the corresponding positive
number as each positive or unsigned number will have a signed opposite, for example,
+2 and -2, +10 and -10, etc.
But how do we represent signed binary numbers if all we have is a bunch of one’s and
zero’s. We know that binary digits, or bits only have two values, either a “1” or a “0”
and conveniently for us, a sign also has only two values, being a “+” or a “–“.
Then we can use a single bit to identify the sign of a signed binary number as being
positive or negative in value. So to represent a positive binary number (+n) and a
negative (-n) binary number, we can use them with the addition of a sign.
For signed binary numbers the most significant bit (MSB) is used as the sign bit. If the
sign bit is “0”, this means the number is positive in value. If the sign bit is “1”, then
the number is negative in value. The remaining bits in the number are used to
represent the magnitude of the binary number in the usual unsigned binary number
format way.
Then we can see that the Sign-and-Magnitude (SM) notation stores positive and
negative values by dividing the “n” total bits into two parts: 1 bit for the sign and n–1
bits for the value which is a pure binary number. For example, the decimal number 53
can be expressed as an 8-bit signed binary number as follows.
Positive Signed Binary Numbers

Negative Signed Binary Numbers

The disadvantage here is that whereas before we had a full range n-bit unsigned binary
number, we now have an n-1 bit signed binary number giving a reduced range of digits
from:
-(2^(n-1) - 1) to +(2^(n-1) - 1)
So for example: if we have 4 bits to represent a signed binary number, (1-bit for
the Sign bit and 3-bits for the Magnitude bits), then the actual range of numbers we
can represent in sign-magnitude notation would be:
-(2^(4-1) - 1) to +(2^(4-1) - 1)
-(2^3 - 1) to +(2^3 - 1)
-7 to +7
Whereas before, the range of an unsigned 4-bit binary number would have been
from 0 to 15, or 0 to F in hexadecimal, we now have a reduced range of -7 to +7. Thus
an unsigned binary number does not have a single sign-bit, and therefore can have a
larger binary range as the most significant bit (MSB) is just an extra bit or digit rather
than a used sign bit.
Another disadvantage of the sign-magnitude form is that we can have a positive
result for zero, +0 or (0000)2, and a negative result for zero, -0 or (1000)2. Both are valid,
but which one is correct?
Signed Binary Numbers Example No1
Convert the following decimal values into signed binary numbers using the sign-
magnitude format:
(-15)10 as a 6-bit number ⇒ (101111)2

(+23)10 as a 6-bit number ⇒ (010111)2

(-56)10 as an 8-bit number ⇒ (10111000)2

(+85)10 as an 8-bit number ⇒ (01010101)2

(-127)10 as an 8-bit number ⇒ (11111111)2


Note that for a 4-bit, 6-bit, 8-bit, 16-bit or 32-bit signed binary number all the bits
MUST have a value, therefore “0’s” are used to fill the spaces between the leftmost
sign bit and the first or highest value “1”.
The sign-magnitude representation of a binary number is a simple method to use and
understand for representing signed binary numbers, as we use this system all the time
with normal decimal (base 10) numbers in mathematics: a "1" is added to the front of the
number if it is negative and a "0" if it is positive.
However, using this sign-magnitude method can result in two
different bit patterns having the same binary value. For example, +0 and -0 would
be 0000 and 1000 respectively as signed 4-bit binary numbers. So we can see that
using this method there can be two representations for zero, a positive zero ( (0000)2 )
and also a negative zero ( (1000)2 ), which can cause big complications for computers
and digital systems.
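A minimal Python sketch of the sign-magnitude rule just described (the helper name is illustrative): the MSB stores the sign and the remaining n-1 bits store the magnitude, reproducing Example No1.

def to_sign_magnitude(value, bits):
    # Encode `value` as a sign-magnitude string: MSB = sign, remaining bits = magnitude.
    magnitude = abs(value)
    if magnitude > 2 ** (bits - 1) - 1:
        raise ValueError("value out of range for this width")
    sign = "1" if value < 0 else "0"
    return sign + format(magnitude, "0{}b".format(bits - 1))

print(to_sign_magnitude(-15, 6))   # 101111
print(to_sign_magnitude(+23, 6))   # 010111
print(to_sign_magnitude(-56, 8))   # 10111000
print(to_sign_magnitude(+85, 8))   # 01010101
print(to_sign_magnitude(-127, 8))  # 11111111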
One’s Complement of a Signed Binary Number
One’s Complement or 1’s Complement as it is also termed, is another method which
we can use to represent negative binary numbers in a signed binary number system. In
one’s complement, positive numbers (also known as non-complements) remain
unchanged as before with the sign-magnitude numbers.
Negative numbers however, are represented by taking the one’s complement
(inversion, negation) of the unsigned positive number. Since positive numbers always
start with a “0”, the complement will always start with a “1” to indicate a negative
number.
The one’s complement of a negative binary number is the complement of its positive
counterpart, so to take the one’s complement of a binary number, all we need to do is
change each bit in turn. Thus the one's complement of "1" is "0" and vice versa; the
one's complement of (10010100)2 is simply (01101011)2, as all the 1's are changed to
0's and the 0's to 1's.
The easiest way to find the one’s complement of a signed binary number when
building digital arithmetic or logic decoder circuits is to use Inverters. The inverter is
naturally a complement generator and can be used in parallel to find the 1’s
complement of any binary number as shown.
1’s Complement Using Inverters

Then we can see that it is very easy to find the one’s complement of a binary
number N as all we need do is simply change the 1’s to 0’s and the 0’s to 1’s to give
us a -N equivalent. Also, just like the previous sign-magnitude representation, one's
complement with n-bit notation can represent numbers in the range from -(2^(n-1) - 1)
to +(2^(n-1) - 1). For example, a 4-bit representation in the one's complement format
can be used to represent decimal numbers in the range from -7 to +7, with two
representations of zero: 0000 (+0) and 1111 (-0), the same as before.
Addition and Subtraction Using One’s Complement
One of the main advantages of One’s Complement is in the addition and subtraction
of two binary numbers. In mathematics, subtraction can be implemented in a variety of
different ways as A – B, is the same as saying A + (-B) or -B + A etc. Therefore, the
complication of subtracting two binary numbers can be performed by simply using
addition.
We saw in the Binary Adder tutorial that binary addition follows the same rules as for
the normal addition except that in binary there are only two bits (digits) and the largest
digit is a “1”, (just as “9” is the largest decimal digit) thus the possible combinations
for binary addition are as follows:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10 (sum 0 plus a carry of 1)
When the two numbers to be added are both positive, the sum A + B, they can be
added together by means of the direct sum (including the number and bit sign),
because when single bits are added together, “0 + 0”, “0 + 1”, or “1 + 0” results in a
sum of “0” or “1”. This is because when the two bits we want to be added together are
odd (“0” + “1” or “1 + 0”), the result is “1”. Likewise when the two bits to be added
together are even (“0 + 0” or “1 + 1”) the result is “0” until you get to “1 + 1” then the
sum is equal to “0” plus a carry “1”. Let’s look at a simple example.
Subtraction of Two Binary Numbers
An 8-bit digital system is required to subtract the following two numbers 115 and 27
from each other using one’s complement. So in decimal this would be: 115 – 27 = 88.
First we need to convert the two decimal numbers into binary and make sure that each
number has the same number of bits by adding leading zero’s to produce an 8-bit
number (byte). Therefore:
(115)10 in binary is: 0 1 1 1 0 0 1 1
(27)10 in binary is: 0 0 0 1 1 0 1 1
Now we need to find the complement of the second binary number, (00011011) while
leaving the first number (01110011) unchanged. So by changing all the 1’s to 0’s and
0’s to 1’s, the one’s complement of 00011011 is therefore equal to 11100100.
Adding the first number and the complement of the second number gives:
01110011

+ 11100100

Overflow → 1 01010111

Since the digital system is to work with 8-bits, only the first eight digits are used to
provide the answer to the sum, and we simply ignore the last bit (bit 9). This bit is called
an “overflow” bit. Overflow occurs when the sum of the most significant (left-most)
column produces a carry forward. This overflow or carry bit can be ignored
completely or passed to the next digital section for use in its calculations. Overflow
indicates that the answer is positive. If there is no overflow then the answer is
negative.
The 8-bit result from above is: 01010111 (the overflow “1” cancels out) and to convert
it back from a one’s complement answer to the real answer we now have to add “1” to
the one’s complement result, therefore:
01010111

+ 1

01011000
So the result of subtracting 27 ( (00011011)2 ) from 115 ( (01110011)2 ) using 1's
complement in binary gives the answer of (01011000)2 or (64 + 16 + 8) = (88)10 in
decimal.
Then we can see that signed or unsigned binary numbers can be subtracted from each
other using One’s Complement and the process of addition. Binary adders such as the
TTL 74LS83 or 74LS283 can be used to add or subtract two 4-bit signed binary
numbers or cascaded together to produce 8-bit adders complete with carry-out.
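The 115 - 27 subtraction above can be checked with a small Python sketch that works on 8-bit values; the function names are illustrative and, to stay close to the worked example, the sketch assumes the result is non-negative (so an overflow/carry occurs).

def ones_complement(bits):
    # Invert every bit of a binary string.
    return "".join("1" if b == "0" else "0" for b in bits)

def subtract_ones_complement(a, b, width=8):
    # Compute a - b by adding a to the one's complement of b,
    # then adding the end-around carry back in (assumes a >= b).
    b_comp = ones_complement(format(b, "0{}b".format(width)))
    total = a + int(b_comp, 2)
    if total >= 2 ** width:          # overflow bit present: result is positive
        total = (total - 2 ** width) + 1
    return total

print(subtract_ones_complement(115, 27))  # 88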
Two’s Complement
Two’s Complement or 2’s Complement as it is also termed, is another method like
the previous sign-magnitude and one’s complement form, which we can use to
represent negative binary numbers in a signed binary number system. In two’s
complement, the positive numbers are exactly the same as before for unsigned binary
numbers. A negative number, however, is represented by a binary number, which
when added to its corresponding positive equivalent results in zero.
In two’s complement form, a negative number is the 2’s complement of its positive
number with the subtraction of two numbers being A – B = A + ( 2’s complement of B
) using much the same process as before as basically, two’s complement is one’s
complement + 1.
The main advantage of two’s complement over the previous one’s complement is that
there is no double-zero problem plus it is a lot easier to generate the two’s complement
of a signed binary number. Therefore, arithmetic operations are relatively easier to
perform when the numbers are represented in the two’s complement format.
Let’s look at the subtraction of our two 8-bit numbers 115 and 27 from above using
two’s complement, and we remember from above that the binary equivalents are:
(115)10 in binary is: 0 1 1 1 0 0 1 1
(27)10 in binary is: 0 0 0 1 1 0 1 1
Our numbers are 8 bits long, so there are 2^8 = 256 values available to represent them,
and in binary this equals (100000000)2 or (256)10. Then the two's complement of
(27)10 will be:
2^8 – 00011011 = 100000000 – 00011011 = (11100101)2
The complementation of the second negative number means that the subtraction
becomes a much easier addition of the two numbers so therefore the sum
is: 115 + ( 2’s complement of 27 ) which is:
01110011 + 11100101 = 1 01011000
As previously, the 9th overflow bit is disregarded as we are only interested in the first
8 bits, so the result is (01011000)2 or (64 + 16 + 8) = (88)10 in decimal, the same as
before.
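The same subtraction in two's complement form can be sketched in a couple of lines of Python: the two's complement of b is 2^width - b (its one's complement plus 1), and the carry out of the MSB is simply discarded.

def subtract_twos_complement(a, b, width=8):
    # a - b = a + (two's complement of b), with the overflow bit dropped.
    twos_comp_b = (2 ** width - b) % (2 ** width)
    return (a + twos_comp_b) % (2 ** width)

result = subtract_twos_complement(115, 27)
print(format(result, "08b"), result)   # 01011000 88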
4-bit Signed Binary Number Comparison
Decimal Signed Signed One’s Signed Two’s
Magnitude Complement Complement
+7 0111 0111 0111
+6 0110 0110 0110
+5 0101 0101 0101
+4 0100 0100 0100
+3 0011 0011 0011
+2 0010 0010 0010
+1 0001 0001 0001
+0 0000 0000 0000
-0 1000 1111 –
-1 1001 1110 1111
-2 1010 1101 1110
-3 1011 1100 1101
-4 1100 1011 1100
-5 1101 1010 1011
-6 1110 1001 1010
-7 1111 1000 1001
Signed-complement forms of binary numbers can use either 1’s complement or 2’s
complement. The 1’s complement and the 2’s complement of a binary number are
important because they permit the representation of negative numbers.
The method of 2’s complement arithmetic is commonly used in computers to handle
negative numbers the only disadvantage is that if we want to represent negative binary
numbers in the signed binary number format, we must give up some of the range of
the positive number we had before.
Cyclic Code
Cyclic Code is known to be a subclass of linear block codes where cyclic shift in the
bits of the codeword results in another codeword. It is quite important as it offers easy
implementation and thus finds applications in various systems.
Cyclic codes are widely used in satellite communication as the information sent
digitally is encoded and decoded using cyclic coding. These are error-correcting codes
where the actual information is sent over the channel by combining with the parity
bits.
Introduction
Cyclic codes are known to be a crucial subcategory of linear coding techniques because
they offer efficient encoding and decoding schemes using a shift register. They are
used in error correction as they can check for double or burst errors. Various other
important codes, like Reed-Solomon, Golay, Hamming and BCH codes, can be represented
using cyclic codes.
Basically, a shift register and a modulo-2 adder are the two crucial elements
considered as building blocks of cyclic encoding. Using a shift register, encoding can
be efficiently performed. The fundamental elements of a shift register are flip-flops
(which act as storage units) and input-output lines, while the other element, the binary
(modulo-2) adder, has two inputs and one output.

Properties of Cyclic Code


We have mentioned at the beginning itself that cyclic codes fall under the category of
linear block codes. Let us now understand on following which properties a code is said
to be of cyclic nature.
Property 1: Property of Linearity
According to this property, a linear combination of two codewords must be another
codeword.
Suppose we have two codewords Ci and Cj. Then, on adding them (modulo-2),

Ci ⊕ Cj = Cp

where Cp must also be a codeword.

For example, suppose we are given 3 codewords (110, 101, 011).
So, according to the linearity property, the modulo-2 addition of any two of the given codewords
must produce the third codeword. Let's check:
110 ⊕ 101 = 011, 110 ⊕ 011 = 101, 101 ⊕ 011 = 110
Hence, the given code is linear.


Property 2: Property of Cyclic Shifting
According to this property, after a right or left shift in the bits of codewords the
resultant code generated must be another codeword.
Suppose C is a codeword given as:
C = (c1, c2, ..., cn)
Then, after a right cyclic shift, the word
C' = (cn, c1, c2, ..., cn-1)
must also be a codeword.
Here we have performed a right cyclic shift to produce the new codeword.
For example, consider again those 3 codewords (110, 101, 011) which we considered
for linearity property.
So, according to cyclic shifting property, an either right or left shift in the bits of a
codeword must generate another codeword.
110: shifting the bits towards the right will provide 011.
101: a right shift in the bits of this codeword will give 110.
011: right shifting of these bits will provide 101.
Hence, it is clear that shifting the bits of the codewords gives rise to another codeword
thus cyclic shifting property is verified.
Codes that follow both linearity, as well as cyclic shifting, are called cyclic codes.
It is to be noted here that the all-zero word is always a valid codeword of a linear code.
However, containing the all-zero codeword is a necessary condition for a cyclic code, not a
sufficient one.
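Both properties can be checked mechanically. The Python sketch below (illustrative helper name) verifies the linearity and cyclic-shift properties for a small set of codewords; the all-zero word is included, since, as noted above, it must be a valid codeword.

def is_cyclic(codewords):
    # Check linearity (XOR closure) and closure under a right cyclic shift.
    for a in codewords:
        for b in codewords:
            xor = "".join("1" if x != y else "0" for x, y in zip(a, b))
            if xor not in codewords:
                return False          # fails the linearity property
        if a[-1] + a[:-1] not in codewords:
            return False              # fails the cyclic shifting property
    return True

print(is_cyclic({"000", "110", "101", "011"}))   # True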

Example of Cyclic Code


We have discussed in linear block codes as well that a linear codeword is generally
given as c(n,k). Here n represents the total bits in the codeword and k denotes the
message bits. Thus, the parity bits are (n-k). Suppose we are given a code (7,4) then on
comparing with general format the codeword will have 7 bits and the actual message
bits are 4 while rest 3 are parity bits.
A cyclic codeword is given as:
C = [c0, c1, c2, ..., c(n-1)]
Then the codeword polynomial will be represented as:
C(X) = c0 + c1*X + c2*X^2 + ... + c(n-1)*X^(n-1)
For a given codeword C = [1011], the codeword polynomial will be given as:
C(X) = 1*X^0 + 0*X^1 + 1*X^2 + 1*X^3
C(X) = X^3 + X^2 + 1
In a similar way, for any message codeword m, the message polynomial is
M(X) = m0 + m1*X + ... + m(k-1)*X^(k-1)
and the generator polynomial G(X) is a fixed polynomial of degree n-k that defines the code.
Codewords are classified as systematic and non-systematic codewords.


A systematic codeword is one in which the parity bits and message bits are present in
separated forms.
C = [parity bits, message bits]
But a non-systematic codeword is the one in which the message and parity bits exist in
intermixed format and cannot be separated just by noticing the initial and final bits.
Let us now understand the coding and decoding of codewords using cyclic code.
Encoding
Non-Systematic Cyclic Encoding: Consider the message signal given as:
m = [1110]
Thus,
M(X) = 1*X^0 + 1*X^1 + 1*X^2 + 0*X^3
M(X) = X^2 + X + 1
with generator polynomial G(X) = X^3 + X + 1.
For a non-systematic code, the codeword polynomial is given as the product of the message and generator polynomials:
C(X) = M(X) G(X)
C(X) = (X^2 + X + 1)(X^3 + X + 1)
C(X) = X^5 + X^3 + X^2 + X^4 + X^2 + X + X^3 + X + 1
Here modulo-2 addition will be performed, and in modulo-2 addition the sum of 2
identical terms results in 0.
Thus, the repeated X^3, X^2 and X terms cancel. So,
C(X) = 1 + X^4 + X^5
Hence, from the above codeword polynomial, the codeword will be:
C = [1000110]
From the codeword bits, we can clearly interpret that the encoded codeword contains
the message and parity bits in an intermixed pattern. Thus, it is a non-systematic
codeword and direct determination of the message bits is not possible.
Systematic Cyclic Encoding: Consider another message signal:
m = [1011]
So, the message polynomial will be:
M(X) = 1 + X^2 + X^3
while the generator polynomial is G(X) = X^3 + X + 1.
The equation for determining the 7-bit codeword for a systematic code is given as:
C(X) = X^(n-k) M(X) + P(X)
where P(X) represents the parity polynomial and is given by the remainder:
P(X) = remainder of [ X^(n-k) M(X) / G(X) ]
So, to construct the systematic codeword, first we have to determine P(X).
Since n = 7 and k = 4,
X^(n-k) M(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6
Therefore, dividing X^6 + X^5 + X^3 by G(X) = X^3 + X + 1 using modulo-2 division leaves a remainder of 1.
Hence, the obtained value of P(X) = 1.
Now, substituting the values in the codeword polynomial equation,
C(X) = X^3 (X^3 + X^2 + 1) + 1
C(X) = 1 + X^3 + X^5 + X^6
Hence, the codeword for the above code polynomial will be:
C = [1001011]
So, here the first 3 bits [100] are parity bits and the last four bits [1011] are message bits. We
can cross-check that we considered [1011] as the message bits and the parity
polynomial remainder was 1, i.e., the parity bits are [100].
Hence, in this way encoding of non-systematic and systematic codewords is
performed.
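Both encodings can be reproduced with modulo-2 polynomial arithmetic. In the Python sketch below (illustrative helper names), a polynomial is stored as an integer whose bit i is the coefficient of X^i, matching the [c0 c1 ... c6] ordering used in the text; the outputs match the two codewords derived above.

def poly_mul(a, b):
    # Carry-less (modulo-2) polynomial multiplication.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def poly_mod(dividend, divisor):
    # Remainder of modulo-2 polynomial long division.
    while dividend and dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

def bits(poly, width):
    # Show coefficients in [c0 c1 ... c(width-1)] order.
    return "".join(str((poly >> i) & 1) for i in range(width))

G = 0b1011                                   # G(X) = 1 + X + X^3

# Non-systematic: C(X) = M(X) G(X) with m = [1110], i.e. M(X) = 1 + X + X^2
print(bits(poly_mul(0b0111, G), 7))          # 1000110

# Systematic: C(X) = X^(n-k) M(X) + P(X) with m = [1011], i.e. M(X) = 1 + X^2 + X^3
shifted = 0b1101 << 3                        # X^3 * M(X)
parity = poly_mod(shifted, G)                # P(X) = 1
print(bits(shifted | parity, 7))             # 1001011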

Decoding
To understand how detection with a cyclic code takes place, consider R(X) as the received
polynomial, C(X) as the encoded polynomial, G(X) as the generator polynomial, and
E(X) as the error polynomial.
The syndrome, reflecting the error introduced during transmission, is given as:
S(X) = remainder of [ R(X) / G(X) ]
Since R(X) is a combination of the encoded polynomial and the error polynomial, R(X) = C(X) + E(X), the above
equation can be written as:
S(X) = remainder of [ E(X) / G(X) ]
Here, if the obtained remainder is 0 then there is no error, and if the obtained
remainder is not 0 then there is an error that needs to be corrected.

This is so because the encoded polynomial C(X) itself is exactly divisible by G(X) and thus
contributes nothing to the syndrome.
Now, let us check the error polynomial. Consider the error table given below, where
we have assumed a single error in each of the bits of the code.
So, now using the syndrome formula for the error polynomial,
S(X) = remainder of [ E(X) / G(X) ],
we can determine the erroneous bit. Consider the generator polynomial G(X) = X^3 +
X + 1 and divide each single-bit error polynomial by it.

Now, tabulating it,
Consider an example where the transmitted codeword is [1110010] and the received code is
r(n,k) = [1010010]. Hence,
R(X) = 1 + X^2 + X^5
Now, let us check for the syndrome, by dividing the received polynomial R(X) by the
generator polynomial G(X).

We will get,
S=X≠0
This means the received codeword contains an error. From the tabular representation,
it is clear that X has an error code [0100000]. Thus, this represents an error in the
second bit of the received code.
Hence, by using the same approach we can perform encoding and decoding using
cyclic code.
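The syndrome check for this example can be sketched with the same modulo-2 division helper used in the encoding sketch; a non-zero remainder flags the error, and here it evaluates to X, pointing at the single-bit error pattern [0100000].

def poly_mod(dividend, divisor):
    # Remainder of modulo-2 polynomial long division (bit i = coefficient of X^i).
    while dividend and dividend.bit_length() >= divisor.bit_length():
        dividend ^= divisor << (dividend.bit_length() - divisor.bit_length())
    return dividend

G = 0b1011                  # G(X) = 1 + X + X^3
R = 0b0100101               # received [1010010] read as c0..c6, i.e. R(X) = 1 + X^2 + X^5

syndrome = poly_mod(R, G)
print(syndrome)             # 2, i.e. S(X) = X, which is non-zero -> error in the X term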

What are Error-Detecting Codes?


Error-detecting codes are a sequence of numbers generated by specific
procedures for detecting errors in data that has been transmitted over computer
networks.
When bits are transmitted over a computer network, they are subject to corruption
due to interference and network problems. The corrupted bits lead to spurious data
being received by the receiver and are called errors.
Error-detecting codes ensure that messages are encoded before they are sent over
noisy channels. The encoding is done in a manner such that the decoder at the receiving
end can detect whether there are errors in the incoming signal with a high probability of
success.
Features of Error Detecting Codes
 Error detecting codes are adopted when backward error correction techniques are used
for reliable data transmission. In this method, the receiver sends a feedback message to
the sender to inform whether an error-free message has been received or not. If there
are errors, then the sender retransmits the message.
 Error-detecting codes are usually block codes, where the message is divided into
fixed-sized blocks of bits, to which redundant bits are added for error detection.
 Error detection involves checking whether any error has occurred or not. The number
of error bits and the type of error does not matter.
Error Detection Techniques
There are three main techniques for detecting errors

Parity Check
Parity check is done by adding an extra bit, called the parity bit, to the data to make the
number of 1s either even in case of even parity, or odd in case of odd parity.
While creating a frame, the sender counts the number of 1s in it and adds the parity bit
in the following way:
 In case of even parity: If number of 1s is even then parity bit value is 0. If number of
1s is odd then parity bit value is 1.
 In case of odd parity: If number of 1s is odd then parity bit value is 0. If number of 1s
is even then parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even parity
check, if the count of 1s is even, the frame is accepted, otherwise it is rejected. Similar
rule is adopted for odd parity check.
Parity check is suitable for single bit error detection only.
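A minimal Python sketch of even-parity generation and checking (names are illustrative):

def add_even_parity(data):
    # Append a parity bit so that the total number of 1s is even.
    return data + ("1" if data.count("1") % 2 else "0")

def check_even_parity(frame):
    # Accept the frame only if its number of 1s is even.
    return frame.count("1") % 2 == 0

frame = add_even_parity("1011001")    # four 1s -> parity bit is 0
print(frame)                          # 10110010
print(check_even_parity(frame))       # True
print(check_even_parity("10110011"))  # False: a single-bit error is detected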
Checksum
In this error detection scheme, the following procedure is applied
 Data is divided into fixed sized frames or segments.
 The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.
 The receiver adds the incoming segments along with the checksum using 1’s
complement arithmetic to get the sum and then complements it.
 If the result is zero, the received frames are accepted; otherwise they are discarded.
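The checksum procedure above can be sketched in Python for 8-bit segments, using 1's complement (end-around carry) addition; the segment values and function names are only illustrative.

def ones_complement_sum(segments, width=8):
    # Add the segments with end-around carry (1's complement arithmetic).
    total = 0
    for seg in segments:
        total += seg
        if total >= 1 << width:
            total = (total & ((1 << width) - 1)) + 1
    return total

def make_checksum(segments, width=8):
    # The checksum is the complement of the 1's complement sum.
    return ((1 << width) - 1) ^ ones_complement_sum(segments, width)

data = [0b10011001, 0b11100010, 0b00100100]
checksum = make_checksum(data)

# Receiver: sum the segments plus the checksum and complement the result.
receiver_check = ((1 << 8) - 1) ^ ones_complement_sum(data + [checksum])
print(receiver_check)   # 0 -> the frames are accepted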
Cyclic Redundancy Check (CRC)
Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent
by a predetermined divisor agreed upon by the communicating systems. The divisor is
generated using polynomials.
 Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of data segment. This makes the
resulting data unit exactly divisible by the divisor.
 The receiver divides the incoming data unit by the divisor. If there is no remainder, the
data unit is assumed to be correct and is accepted. Otherwise, it is understood that the
data is corrupted and is therefore rejected.
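In practice, CRC values are usually computed by library routines. The short Python sketch below uses zlib's CRC-32 purely to illustrate appending a check value and re-verifying it at the receiver; the underlying operation is the same modulo-2 division by a generator polynomial discussed for cyclic codes above.

import zlib

message = b"1101011011"                 # data to protect (illustrative)
crc = zlib.crc32(message)               # sender computes the CRC and appends it

print(zlib.crc32(message) == crc)       # True  -> data unit accepted

corrupted = b"1101011111"
print(zlib.crc32(corrupted) == crc)     # False -> data unit rejected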

Logic Gates
Logic gates play an important role in circuit design and digital systems. A logic gate is a building
block of a digital system: an electronic circuit that always has only one output.
These gates can have one input or more than one input, but most of the gates have two
inputs. On the basis of the relationship between the input and the output, these gates
are named as AND gate, OR gate, NOT gate, etc.
There are different types of gates which are as follows:
AND Gate
This gate works in the same way as the logical operator "and". The AND gate is a
circuit that performs the AND operation of the inputs. This gate has a minimum of 2
input values and an output value.
Y=A AND B AND C AND D……N
Y=A.B.C.D……N
Y=ABCD……N
Logic Design

Truth Table

OR Gate
This gate works in the same way as the logical operator "or". The OR gate is a circuit
which performs the OR operation of the inputs. This gate also has a minimum of 2
input values and an output value.
Y=A OR B OR C OR D……N
Y=A+B+C+D……N
Logic Design

Truth Table

NOT Gate
The NOT gate is also called an inverter. This gate gives the inverse value of the input
value as a result. This gate has only one input and one output value.
Y=NOT A
Y=A'
Logic Design
Truth Table

NAND Gate
The NAND gate is the combination of AND gate and NOT gate. This gate gives the
same result as a NOT-AND operation. This gate can have two or more than two input
values and only one output value.
Y=A NOT AND B NOT AND C NOT AND D……N
Y=A NAND B NAND C NAND D……N
Logic Design

Truth Table

NOR Gate
The NOR gate is the combination of an OR gate and NOT gate. This gate gives the
same result as the NOT-OR operation. This gate can have two or more than two input
values and only one output value.
Y=A NOT OR B NOT OR C NOT OR D……N
Y=A NOR B NOR C NOR D……N
Logic Design

Truth Table
XOR Gate
The XOR gate is also known as the Ex-OR gate. The XOR gate is used in half and full
adder and subtractor. The exclusive-OR gate is sometimes called as EX-OR and X-OR
gate. This gate can have two or more than two input values and only one output value.
Y=A XOR B XOR C XOR D……N
Y=A⨁B⨁C⨁D……N
Y=AB'+A'B
Logic Design

Truth Table

XNOR Gate
The XNOR gate is also known as the Ex-NOR gate. The XNOR gate is used in half
and full adder and subtractor. The exclusive-NOR gate is sometimes called as EX-
NOR and X-NOR gate. This gate can have two or more than two input values and only
one output value.
Y=A XNOR B XNOR C XNOR D……N
Y=A⊖B⊖C⊖D……N
Y=A'B'+AB
Logic Design

Truth Table
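In software these gates correspond directly to Boolean operations, so their truth tables can also be generated with a small Python sketch of the two-input versions:

def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return a ^ b
def XNOR(a, b): return NOT(XOR(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NAND(a, b), NOR(a, b), XOR(a, b), XNOR(a, b))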

Boolean Algebra
Boolean Algebra is used to analyze and simplify the digital (logic) circuits. It uses
only the binary numbers i.e. 0 and 1. It is also called as Binary Algebra or logical
Algebra. Boolean algebra was invented by George Boole in 1854.
Rule in Boolean Algebra
Following are the important rules used in Boolean algebra.
 Variable used can have only two values. Binary 1 for HIGH and Binary 0 for LOW.
 Complement of a variable is represented by an overbar (-) or an apostrophe ('). Thus, the complement of
variable B is represented as B'. Thus if B = 0 then B' = 1, and if B = 1 then B' = 0.
 ORing of the variables is represented by a plus (+) sign between them. For example
ORing of A, B, C is represented as A + B + C.
 Logical ANDing of the two or more variable is represented by writing a dot between
them such as A.B.C. Sometime the dot may be omitted like ABC.

Boolean Laws
There are six types of Boolean Laws.
Commutative law
Any binary operation which satisfies the following expression is referred to as a
commutative operation:
A + B = B + A and A.B = B.A
Commutative law states that changing the sequence of the variables does not have any
effect on the output of a logic circuit.
Associative law
This law states that the order in which the logic operations are performed is irrelevant
as their effect is the same:
(A + B) + C = A + (B + C) and (A.B).C = A.(B.C)
Distributive law
Distributive law states the following condition:
A.(B + C) = A.B + A.C
AND law
These laws use the AND operation. Therefore they are called as AND laws:
A.0 = 0,  A.1 = A,  A.A = A,  A.A' = 0
OR law
These laws use the OR operation. Therefore they are called as OR laws:
A + 0 = A,  A + 1 = 1,  A + A = A,  A + A' = 1
INVERSION law
This law uses the NOT operation. The inversion law states that double inversion of a
variable results in the original variable itself:
(A')' = A
Important Boolean Theorems


Following are a few important Boolean theorems.
Boolean function/theorems Description
Boolean Functions Boolean Functions and Expressions, K-Map and
NAND Gates realization
De Morgan's Theorems De Morgan's Theorem 1 and Theorem 2

De Morgan's Theorems
De Morgan has suggested two theorems which are extremely useful in Boolean
Algebra. The two theorems are discussed below.
Theorem 1
(A.B)' = A' + B'
 The left hand side (LHS) of this theorem represents a NAND gate with inputs A and
B, whereas the right hand side (RHS) of the theorem represents an OR gate with
inverted inputs.
 This OR gate is called as Bubbled OR.

Table showing verification of the De Morgan's first theorem −

Theorem 2
(A + B)' = A'.B'
 The LHS of this theorem represents a NOR gate with inputs A and B, whereas the
RHS represents an AND gate with inverted inputs.
 This AND gate is called as Bubbled AND.
Table showing verification of the De Morgan's second theorem −
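Both theorems can also be verified exhaustively for two inputs with a few lines of Python; the loop below reproduces the verification tables referred to above, confirming that the two sides agree for every input combination.

for A in (0, 1):
    for B in (0, 1):
        nand = 1 - (A & B)                # (A.B)'
        bubbled_or = (1 - A) | (1 - B)    # A' + B'
        nor = 1 - (A | B)                 # (A + B)'
        bubbled_and = (1 - A) & (1 - B)   # A'.B'
        print(A, B, nand == bubbled_or, nor == bubbled_and)   # always True, True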

Canonical and Standard Form


Canonical Form – In Boolean algebra, a Boolean function can be expressed in
Canonical Disjunctive Normal Form, built from minterms, or in
Canonical Conjunctive Normal Form, built from maxterms.
In a minterm, we look for the input combinations where the output results in "1", while in
a maxterm we look for the combinations where the output results in "0".
The sum of minterms is also known as the Sum of Products (SOP) form.
The product of maxterms is also known as the Product of Sums (POS) form.
Boolean functions expressed as a sum of minterms or a product of maxterms are said to
be in canonical form.
Standard Form – A Boolean variable can be expressed in either true form or
complemented form. In canonical form every term contains all of the variables, each
in either true or complemented form, while in standard form the number of variables
in a term depends on how the SOP or POS expression is written.
A Boolean function can be expressed algebraically from a given truth table by forming
a:
 minterm for each combination of the variables that produces a 1 in the function and
then taking the OR of all those terms.
 maxterm for each combination of the variables that produces a 0 in the function and
then taking the AND of all those terms.
Truth table representing minterm and maxterm –

From the above table it is clear that minterm is expressed in product format and
maxterm is expressed in sum format.
Sum of minterms –
The minterms whose sum defines the Boolean function are those which give the 1’s of
the function in a truth table. Since the function can be either 1 or 0 for each minterm,
and since there are 2^n minterms, one can calculate all the functions that can be
formed with n variables to be (2^(2^n)). It is sometimes convenient to express a
Boolean function in its sum of minterm form.

 Example – Express the Boolean function F = A + B’C as standard sum of minterms.


 Solution –
A = A(B + B’) = AB + AB’
This function is still missing one variable, so
A = AB(C + C’) + AB'(C + C’) = ABC + ABC’+ AB’C + AB’C’
The second term B’C is missing one variable; hence,
B’C = B’C(A + A’) = AB’C + A’B’C
Combining all terms, we have
F = A + B’C = ABC + ABC’ + AB’C + AB’C’ + AB’C + A’B’C
But AB’C appears twice, and
according to theorem 1 (x + x = x), it is possible to remove one of those occurrences.
Rearranging the minterms in ascending order, we finally obtain
F = A’B’C + AB’C’ + AB’C + ABC’ + ABC
= m1 + m4 + m5 + m6 + m7
SOP is represented as Sigma(1, 4, 5, 6, 7)
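The same result can be cross-checked by evaluating F = A + B'C over all eight input combinations and collecting the rows where F = 1; the short Python sketch below reproduces Sigma(1, 4, 5, 6, 7).

minterms = []
for index in range(8):                    # the index bits are A, B, C with A as the MSB
    A, B, C = (index >> 2) & 1, (index >> 1) & 1, index & 1
    F = A | ((1 - B) & C)                 # F = A + B'C
    if F:
        minterms.append(index)
print(minterms)                           # [1, 4, 5, 6, 7]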
 Example – Express the Boolean function F = xy + x’z as a product of maxterms
 Solution –
F = xy + x’z = (xy + x’)(xy + z) = (x + x’)(y + x’)(x + z)(y + z) = (x’ + y)(x + z)(y +
z)
x’ + y = x’ + y + zz’ = (x’+ y + z)(x’ + y + z’)
x + z = x + z + yy’ = (x + y + z)(x + y’ + z)
y + z = y + z + xx’ = (x + y + z)(x’ + y + z)
F = (x + y + z)(x + y’ + z)(x’ + y + z)(x’ + y + z’)
= M0*M2*M4*M5
POS is represented as Pi(0, 2, 4, 5)
 Example –
F(A, B, C) = Sigma(1, 4, 5, 6, 7)
F'(A, B, C) = Sigma(0, 2, 3) = m0 + m2 + m3
Now, if we take the complement of F’ by DeMorgan’s theorem, we obtain F in a
different form:
F = (m0 + m2 + m3)’
= m0'm2'm3'
= M0*M2*M3
= PI(0, 2, 3)

 Example – Convert Boolean expression in standard form F=y’+xz’+xyz


 Solution – F = (x+x’)y'(z+z’)+x(y+y’)z’ +xyz
F = xy’z+ xy’z’+x’y’z+x’y’z’+ xyz’+xy’z’+xyz
Boolean Expression ⁄ Function
Boolean algebra deals with binary variables and logic operation. A Boolean
Function is described by an algebraic expression called Boolean expression which
consists of binary variables, the constants 0 and 1, and the logic operation symbols.
Consider the following example.

Here the left side of the equation represents the output Y. So we can state equation no.
1

Truth Table Formation


A truth table represents a table having all combinations of inputs and their
corresponding result.
It is possible to convert the switching equation into a truth table. For example,
consider the following switching equation.

The output will be high (1) if A = 1 or BC = 1 or both are 1. The truth table for this
equation is shown by Table (a). The number of rows in the truth table is 2^n, where n is
the number of input variables (n = 3 for the given equation). Hence there are 2^3 = 8
possible combinations of inputs.
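Assuming the switching equation referred to above is Y = A + BC (consistent with the description "high if A = 1 or BC = 1"), the truth table can be generated with the short Python sketch below.

from itertools import product

print("A B C | Y")
for A, B, C in product((0, 1), repeat=3):   # 2^3 = 8 input combinations
    Y = A | (B & C)                         # Y = A + BC
    print(A, B, C, "|", Y)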

Methods to simplify the boolean function


The methods used for simplifying the Boolean function are as follows −
 Karnaugh-map or K-map, and
 NAND gate method.

Karnaugh-map or K-map
The Boolean theorems and the De-Morgan's theorems are useful in manipulating the
logic expression. We can realize the logical expression using gates. The number of
logic gates required for the realization of a logical expression should be reduced to a
minimum possible value by K-map method. This method can be done in two different
ways, as discussed below.
Sum of Products (SOP) Form
It is in the form of a sum of terms such as AB, AC and BC, with each individual term being a
product of variables, say A.B or A.C, etc. Therefore such expressions are known
as expressions in SOP form. The sums and products in SOP form are not actual
additions or multiplications; in fact they are the OR and AND functions. In SOP form,
a 0 represents a barred (complemented) variable and a 1 represents an unbarred variable. SOP form is represented by Σ.
Given below is an example of SOP.

Product of Sums (POS) Form


It is in the form of a product of terms such as (A+B), (B+C) or (A+C), with each term being
a sum of variables. Such expressions are said to be in the product of
sums (POS) form. In POS form, a 0 represents an unbarred variable and a 1 represents a barred (complemented) variable. POS
form is represented by Π.
Given below is an example of POS.

NAND gates Realization


NAND gates can be used to simplify Boolean functions as shown in the example
below.

NAND and NOR implementation


Any logic function can be implemented using NAND gates. To achieve this, first the
logic function has to be written in Sum of Products (SOP) form. Once the logic function is
converted to SOP, it is very easy to implement using NAND gates. In other words,
any logic circuit with AND gates in first level and OR gates in second level can be
converted into a NAND-NAND gate circuit.

Consider the following SOP expression


F = W.X.Y + X.Y.Z + Y.Z.W

The above expression can be implemented with three AND gates in first stage and one
OR gate in second stage as shown in figure.

If bubbles are introduced at the AND gates' outputs and the OR gate's inputs (the same applies
for NOR gates), the above circuit becomes as shown in figure.

Now replace OR gate with input bubble with the NAND gate. Now we have circuit
which is fully implemented with just NAND gates.

Realization of logic gates using NAND gates

Implementing an inverter using NAND gate


Realization of logic function using NOR gates

Any logic function can be implemented using NOR gates. To achieve this, first the
logic function has to be written in Product of Sum (POS) form. Once it is converted to
POS, then it's very easy to implement using NOR gate. In other words any logic circuit
with OR gates in first level and AND gates in second level can be converted into a
NOR-NOR gate circuit.

Consider the following POS expression

F = (X+Y) . (Y+Z)
The above expression can be implemented with two OR gates in the first stage and one
AND gate in the second stage as shown in figure.

If bubbles are introduced at the outputs of the OR gates and the inputs of the AND gate, the
above circuit becomes as shown in figure.

Now replace AND gate with input bubble with the NOR gate. Now we have circuit
which is fully implemented with just NOR gates.
Minimization Technique

The primary objective of all simplification procedures is to obtain an expression that


has the minimum number of terms. Obtaining an expression with the minimum
number of literals is usually the secondary objective. If there is more than one possible
solution with the same number of terms, the one having the minimum number of
literals is the choice.

There are several methods for simplification of Boolean logic expressions. The
process is usually called logic minimization and the goal is to form a result which is
efficient. Two methods we will discuss are algebraic minimization and Karnaugh
maps. For very complicated problems the former method can be done using special
software analysis programs. Karnaugh maps are practical mainly for problems with up to 4
(or, as shown later, 5) binary inputs. The Quine–McCluskey tabular method is used for larger
numbers of binary inputs.
Introduction of K-Map (Karnaugh Map)
In many digital circuits and practical problems we need to find expression with
minimum variables. We can minimize Boolean expressions of 3, 4 variables very
easily using K-map without using any Boolean algebra theorems. K-map can take two
forms Sum of Product (SOP) and Product of Sum (POS) according to the need of
problem. K-map is table like representation but it gives more information than
TRUTH TABLE. We fill grid of K-map with 0’s and 1’s then solve it by making
groups.
Steps to solve expression using K-map-
1. Select K-map according to the number of variables.
2. Identify minterms or maxterms as given in problem.
3. For SOP put 1’s in blocks of K-map respective to the minterms (0’s elsewhere).
4. For POS put 0’s in blocks of K-map respective to the maxterms(1’s elsewhere).
5. Make rectangular groups containing total terms in power of two like 2,4,8 ..(except 1)
and try to cover as many elements as you can in one group.
6. From the groups made in step 5 find the product terms and sum them up for SOP form.
SOP FORM :
1. K-map of 3 variables –

K-map SOP form for 3 variables


Z(A, B, C) = ∑ m(1, 3, 6, 7)
From red group we get product term—
A’C
From green group we get product term—
AB
Summing these product terms, we get the final expression: Z = A'C + AB

2. K-map for 4 variables –

K-map 4 variable SOP form


F(P,Q,R,S)=∑(0,2,5,7,8,10,13,15)
From red group we get product term—
QS
From green group we get product term—
Q’S’
Summing these product terms, we get the final expression: F = QS + Q'S'
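The result can be checked exhaustively. The following short Python sketch (added for illustration, not part of the original worked example) confirms that QS + Q'S' is 1 exactly on the listed minterms:

from itertools import product

minterms = {0, 2, 5, 7, 8, 10, 13, 15}

for i, (p, q, r, s) in enumerate(product((0, 1), repeat=4)):
    expected = 1 if i in minterms else 0        # F(P,Q,R,S) = sum m(0,2,5,7,8,10,13,15)
    simplified = (q & s) | ((1 - q) & (1 - s))  # QS + Q'S'
    assert simplified == expected
print("QS + Q'S' matches the given minterm list")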

POS FORM :
1. K-map of 3 variables –

K-map 3 variable POS form


F(A,B,C)=π(0,3,6,7)
From the red group we find the terms A and B.
Taking the complement of these two: A', B'.
Now sum them up: (A' + B')
From the brown group we find the terms B and C.
Taking the complement of these two: B', C'.
Now sum them up: (B' + C')
From the yellow group we find the terms A', B', C'.
Taking the complement of these three: A, B, C.
Now sum them up: (A + B + C)
We take the product of these three terms to get the final expression:
(A' + B') (B' + C') (A + B + C)
2. K-map of 4 variables –
K-map 4 variable POS form
F(A,B,C,D)=π(3,5,7,8,10,11,12,13)

From green group we find terms


C’ D B
Taking their complement and summing them
(C+D’+B’)
From red group we find terms
C D A’
Taking their complement and summing them
(C’+D’+A)
From blue group we find terms
A C’ D’
Taking their complement and summing them
(A’+C+D)
From brown group we find terms
A B’ C
Taking their complement and summing them
(A’+B+C’)
Finally we express these as product –
(C+D’+B’).(C’+D’+A).(A’+C+D).(A’+B+C’)
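Again, the minimal POS expression can be verified against the maxterm list with a short Python check (an illustrative sketch, not part of the original example):

from itertools import product

maxterms = {3, 5, 7, 8, 10, 11, 12, 13}

for i, (a, b, c, d) in enumerate(product((0, 1), repeat=4)):
    expected = 0 if i in maxterms else 1      # F is 0 exactly on the listed maxterms
    pos = ((c | (1 - d) | (1 - b)) &          # (C + D' + B')
           ((1 - c) | (1 - d) | a) &          # (C' + D' + A)
           ((1 - a) | c | d) &                # (A' + C + D)
           ((1 - a) | b | (1 - c)))           # (A' + B + C')
    assert pos == expected
print("The POS expression matches F(A,B,C,D) = pi(3,5,7,8,10,11,12,13)")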

5 variable K-Map in Digital Logic


Prerequisite – Implicant in K-Map
Karnaugh Map or K-Map is an alternative way to write a truth table and is used for
the simplification of Boolean Expressions. So far we are familiar with 3 variable K-
Map & 4 variable K-Map. Now, let us discuss the 5-variable K-Map in detail. Any
Boolean Expression or Function comprising 5 variables can be solved using the 5-variable
K-Map. A K-map for a 5-variable expression can be drawn as two 4-variable maps, one beside
the other. Such a 5-variable K-Map must contain 2^5 = 32 cells, one for each minterm. As the
number of variables keeps increasing, the efficacy of the Karnaugh map decreases. Let the
5-variable Boolean function be represented as f(P, Q, R, S, T), where P, Q, R, S and T are
the variables, P being the most significant bit variable and T the least significant bit
variable. The structure of such a K-Map for an SOP expression is given below. The cell
number written in each cell can be understood from the example described here:

Here for variable P=0, we have Q = 0, R = 1, S = 1, T = 1 i.e. (PQRST)=(00111) . In


decimal form, this is equivalent to 7. So, for the cell shown above the corresponding
cell no. = 7. In a similar manner, we can write cell numbers corresponding to every
cell as shown in the above figure. Now let us discuss how to use a 5 variable K-Map to
minimize a Boolean Function.
Rules to be followed:
1. If a function is given in compact canonical SOP (Sum of Products) form, then we
write “1” corresponding to each minterm (provided in the question) in the
corresponding cell numbers. For example, for f = ∑ m(0, 1, 5, 7, 30, 31) we write
“1” in the cells numbered 0, 1, 5, 7, 30 and 31.
2. If a function is given in compact canonical POS (Product of Sums) form, then we
write “0” corresponding to each maxterm (provided in the question) in the
corresponding cell numbers. For example, for F = ∏ M(0, 1, 5, 7, 30, 31) we
write “0” in the cells numbered 0, 1, 5, 7, 30 and 31.
Steps to be followed:
1. Make the largest possible subcubes covering all the marked 1’s (in case of SOP) or
all the marked 0’s (in case of POS) in the K-Map. It is important to note that each subcube
can only contain a number of cells that is a power of 2. Also, a subcube of 2^m cells is
possible if and only if every cell in that subcube is adjacent to m of the other cells of
the subcube.
2. All Essential Prime Implicants (EPIs) must be present in the minimal expressions.
I. Solving an SOP function: For a clear understanding, let us solve an example of SOP
function minimization of a 5-variable K-Map using the expression plotted in the K-Map
shown in the figure.
In the above K-Map we have 4 subcubes:
 Subcube 1: The one marked in red comprises cells ( 0, 4, 8, 12, 16, 20, 24, 28)
 Subcube 2: The one marked in blue comprises cells (7, 23)
 Subcube 3: The one marked in pink comprises cells ( 0, 2, 8, 10, 16, 18, 24, 26)
 Subcube 4: The one marked in yellow comprises cells (24, 25, 26, 27)
Now, while writing the minimal expression for each of the subcubes, we look for the
literals that are common to all the cells present in that subcube.
 Subcube 1: S'T' (every cell in this subcube has S = 0 and T = 0)
 Subcube 2: Q'RST
 Subcube 3: R'T'
 Subcube 4: PQR'
Finally, the minimal expression of the given Boolean function can be written as:
f = S'T' + R'T' + Q'RST + PQR'
II. Solving a POS function: Now, let us solve an example of POS function
minimization of a 5-variable K-Map using the expression plotted in the K-Map
shown in the figure.
In the above K-Map we have 4 subcubes:
 Subcube 1: The one marked in red comprises cells (0, 4, 8, 12, 16, 20, 24, 28)
 Subcube 2: The one marked in blue comprises cells (7, 23)
 Subcube 3: The one marked in pink comprises cells (0, 2, 8, 10, 16, 18, 24, 26)
 Subcube 4: The one marked in yellow comprises cells (24, 25, 26, 27)
Now, while writing the minimal expression for each of the subcubes, we look for the
literals that are common to all the cells present in that subcube and write the
corresponding sum term.
 Subcube 1: (S + T)
 Subcube 2: (Q + R' + S' + T')
 Subcube 3: (R + T)
 Subcube 4: (P' + Q' + R)
Finally, the minimal expression of the given Boolean function can be written as:
F = (S + T) (R + T) (Q + R' + S' + T') (P' + Q' + R)

Don’t Care (X) Conditions in K-Maps


One of the very significant and useful concepts in simplifying the output expression
using a K-Map is the concept of “Don’t Care”. The “Don’t Care” conditions allow us to
use the otherwise empty cells of a K-Map to form a grouping of the variables which is
larger than the groups that could be formed without don’t cares. While forming groups of
cells, we can consider a “Don’t Care” cell as 1 or as 0, or we can simply ignore that cell.
Therefore, the “Don’t Care” condition can help us to form larger groups of cells.
A Don’t Care cell can be represented by a cross(X) or minus(-) or phi(Φ) in K-Maps
representing an invalid combination. For example, in the Excess-3 code system, the
states 0000, 0001, 0010, 1101, 1110, and 1111 are invalid or unspecified. These states
are called don’t cares.
A standard SOP function having don’t cares can be converted into a POS expression
by keeping don’t cares as they are, and writing the missing minterms of the SOP form
as the maxterm of POS form. Similarly, a POS function having don’t cares can be
converted to SOP form keeping the don’t cares as they are and writing the missing
maxterms of the POS expression as the minterms of SOP expression.
Example-1:
Minimise the following function in SOP minimal form using K-Maps:
f = m(1, 5, 6, 11, 12, 13, 14) + d(4)
Explanation:
The SOP K-map for the given expression is:

Therefore, SOP minimal is,

f = BC' + BCD' + A'C'D + AB'CD
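The don't-care cell makes the check slightly different: the expression must be 1 on every listed minterm and 0 on every cell that is neither a minterm nor a don't care, while the don't-care cell itself may take either value. A short Python sketch (illustrative only) verifies this:

from itertools import product

minterms = {1, 5, 6, 11, 12, 13, 14}
dont_cares = {4}

for i, (a, b, c, d) in enumerate(product((0, 1), repeat=4)):
    # f = BC' + BCD' + A'C'D + AB'CD
    f = ((b & (1 - c)) |
         (b & c & (1 - d)) |
         ((1 - a) & (1 - c) & d) |
         (a & (1 - b) & c & d))
    if i in minterms:
        assert f == 1      # every required minterm is covered
    elif i not in dont_cares:
        assert f == 0      # all other specified cells stay 0
print("The minimal SOP agrees with f = m(1,5,6,11,12,13,14) + d(4)")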


Example-2:
Minimise the following function in POS minimal form using K-Maps:
F(A, B, C, D) = m(0, 1, 2, 3, 4, 5) + d(10, 11, 12, 13, 14, 15)
Explanation:
Writing the given expression in POS form:

F(A, B, C, D) = M(6, 7, 8, 9) + d(12, 13, 14, 15)


The POS K-map for the given expression is:

Therefore, POS minimal is,


F = (A'+ C)(B' + C')
Example-3:
Minimise the following function in SOP minimal form using K-Maps:
F(A, B, C, D) = m(1, 2, 6, 7, 8, 13, 14, 15) + d(0, 3, 5, 12)
Explanation:
The SOP K-map for the given expression is:
Therefore,

f = AC'D' + A'D + A'C + AB


Significance of “Don’t Care” Conditions:
“Don’t Care” conditions have the following significance in the design of digital
circuits:
1. Simplification of the output:
These conditions denote input combinations that are invalid for a given digital circuit. Thus, they
can be used to further simplify the Boolean output expression of a digital circuit.

2. Reduction in number of gates required:


Simplification of the expression reduces the number of gates to be used for
implementing the given expression. Therefore, don’t cares make the digital circuit
design more economical.
3. Reduced Power Consumption:
Grouping the terms along with don’t cares reduces switching between states and reduces the
amount of logic needed to represent a given digital circuit, which in turn results in less
power consumption.
4. Represent Invalid States in Code Converters:
These are used in code converters. For example- In design of 4-bit BCD-to-XS-3 code
converter, the input combinations 1010, 1011, 1100, 1101, 1110, and 1111 are don’t
cares.
5. Prevention of Hazards in Digital Circuits:
Don’t cares can also help prevent hazards in digital systems.
Combinational Circuits
A combinational circuit is a circuit in which we combine different gates, for example an
encoder, decoder, multiplexer or demultiplexer. Some of the characteristics of combinational
circuits are the following −
 The output of a combinational circuit at any instant of time depends only on the levels present
at the input terminals.
 A combinational circuit does not use any memory. The previous state of the input does not have
any effect on the present state of the circuit.
 A combinational circuit can have n inputs and m outputs.
Block diagram

We are going to elaborate on a few important combinational circuits as follows.


Half Adder
A half adder is a combinational logic circuit with two inputs and two outputs. The half adder circuit is
designed to add two single-bit binary numbers A and B. It is the basic building block for the addition of
two single-bit numbers. This circuit has two outputs, carry and sum.
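For illustration, the half adder behaviour can be modelled in a few lines of Python (a sketch added here, not part of the original text): the sum is the XOR of the inputs and the carry is their AND.

def half_adder(a, b):
    """Add two single-bit numbers and return (sum, carry)."""
    s = a ^ b        # sum bit: XOR of the inputs
    carry = a & b    # carry bit: AND of the inputs
    return s, carry

# Reproduce the half adder truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))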
Block diagram
Truth Table

Circuit Diagram

Full Adder
A full adder is developed to overcome the drawback of the half adder circuit. It can add two one-bit
numbers A and B, and a carry c. The full adder is a three-input and two-output combinational circuit.
Block diagram

Truth Table
Circuit Diagram

N-Bit Parallel Adder


The full adder is capable of adding only two single-bit binary numbers along with a carry input.
But in practice we need to add binary numbers which are much longer than just one bit. To add
two n-bit binary numbers we use the n-bit parallel adder. It uses a number of full adders in
cascade. The carry output of each full adder is connected to the carry input of the next full
adder.
4 Bit Parallel Adder
In the block diagram, A0 and B0 represent the LSBs of the four-bit words A and B. Hence Full
Adder-0 is the lowest stage, and its Cin has been permanently made 0. The rest of the connections
are exactly the same as those of the n-bit parallel adder shown in the figure. The four-bit parallel
adder is a very common logic circuit.
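A minimal Python sketch of this idea is shown below (illustrative only; the LSB-first bit ordering is an assumption made for the example). A full adder function is cascaded four times, with each carry output rippling into the next stage and the lowest Cin tied to 0, just as in the block diagram.

def full_adder(a, b, cin):
    """Add two bits plus a carry-in and return (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (a & cin)
    return s, cout

def parallel_adder_4bit(a_bits, b_bits):
    """Ripple-carry addition of two 4-bit words given as [LSB, ..., MSB] lists."""
    carry = 0                                   # Cin of the lowest stage is permanently 0
    sum_bits = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)      # the carry ripples into the next stage
        sum_bits.append(s)
    return sum_bits, carry

# Example: 0110 (6) + 0111 (7) = 1101 (13); bits are listed LSB first
print(parallel_adder_4bit([0, 1, 1, 0], [1, 1, 1, 0]))   # -> ([1, 0, 1, 1], 0)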
Block diagram

N-Bit Parallel Subtractor


The subtraction can be carried out by taking the 1's or 2's complement of the number to be
subtracted. For example, we can perform the subtraction (A−B) by adding either the 1's or the 2's
complement of B to A. That means we can use a binary adder to perform binary subtraction.
4 Bit Parallel Subtractor
The number to be subtracted (B) is first passed through inverters to obtain its 1's complement. The
4-bit adder then adds A and the 2's complement of B (the inverted B plus a carry-in of 1) to produce
the subtraction. S3 S2 S1 S0 represents the result of the binary subtraction (A−B) and the carry
output Cout represents the polarity of the result. If A ≥ B then Cout = 1 and the result is the true
binary form of (A−B); if A < B then Cout = 0 and the result is in 2's complement form.
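The same full-adder model can be reused to sketch the 4-bit parallel subtractor in Python (illustrative only, LSB-first bit order assumed): each B bit is inverted and the carry-in of the lowest stage is set to 1, so the adder effectively adds the 2's complement of B.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (a & cin)
    return s, cout

def parallel_subtractor_4bit(a_bits, b_bits):
    """Compute A - B on 4-bit words (LSB first) by adding the 2's complement of B."""
    carry = 1                                    # Cin = 1 turns inverted B (1's complement) into the 2's complement
    diff = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, 1 - b, carry)   # 1 - b models the inverter on each B bit
        diff.append(s)
    return diff, carry                           # Cout = 1 means A >= B; Cout = 0 means the result is in 2's complement form

# Example: 9 - 5 = 4  ->  difference 0100 with Cout = 1
print(parallel_subtractor_4bit([1, 0, 0, 1], [1, 0, 1, 0]))   # -> ([0, 0, 1, 0], 1)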
Block diagram

Half Subtractors
A half subtractor is a combinational circuit with two inputs and two outputs (difference and borrow). It
produces the difference between the two binary bits at the input and also produces an output
(borrow) to indicate whether a 1 has been borrowed. In the subtraction (A−B), A is called the minuend bit
and B is called the subtrahend bit.
Truth Table

Circuit Diagram

Full Subtractors
The disadvantage of a half subtractor is overcome by the full subtractor. The full subtractor is a
combinational circuit with three inputs A, B, C and two outputs D and C'. A is the minuend, B is the
subtrahend, C is the borrow produced by the previous stage, D is the difference output and C' is
the borrow output.
Truth Table

Circuit Diagram

Multiplexers
A multiplexer is a special type of combinational circuit. There are n data inputs, one output and m
select inputs, with 2^m = n. It is a digital circuit which selects one of the n data inputs and routes it to
the output. The selection of one of the n inputs is done by the select inputs. Depending on the
digital code applied at the select inputs, one out of the n data sources is selected and transmitted to
the single output Y. E is called the strobe or enable input, which is useful for cascading. It is
generally an active-low terminal, which means it will perform the required operation when it is low.
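The selection behaviour can be sketched in Python as follows (an illustrative model only; here the enable is modelled as active high for simplicity, whereas the strobe input E described above is usually active low):

def mux_4to1(data, s1, s0, enable=1):
    """4:1 multiplexer: route one of the four data inputs to the output Y."""
    if not enable:
        return 0
    index = (s1 << 1) | s0      # the select code chooses the data line
    return data[index]

# Example: select code 10 routes data input D2 to the output
print(mux_4to1([0, 1, 1, 0], s1=1, s0=0))   # -> 1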
Block diagram
Multiplexers come in multiple variations

 2 : 1 multiplexer
 4 : 1 multiplexer
 16 : 1 multiplexer
 32 : 1 multiplexer
Block Diagram

Truth Table

Demultiplexers
A demultiplexer performs the reverse operation of a multiplexer, i.e. it receives one input and
distributes it over several outputs. It has only one input, n outputs and m select inputs. At a time only
one output line is selected by the select lines and the input is transmitted to the selected output line.
A demultiplexer is equivalent to a single-pole multiple-way switch as shown in the figure.
Demultiplexers come in multiple variations.

 1 : 2 demultiplexer
 1 : 4 demultiplexer
 1 : 16 demultiplexer
 1 : 32 demultiplexer
Block diagram
Truth Table

Decoder
A decoder is a combinational circuit. It has n inputs and a maximum of m = 2^n outputs. A decoder is
identical to a demultiplexer without any data input. It performs operations which are exactly
opposite to those of an encoder.
Block diagram

Examples of Decoders are following.

 Code converters
 BCD to seven segment decoders
 Nixie tube decoders
 Relay actuator
2 to 4 Line Decoder
The block diagram of the 2-to-4 line decoder is shown in the figure. A and B are the two inputs, while D0
through D3 are the four outputs. The truth table explains the operation of the decoder: it shows that each
output is 1 for only a specific combination of inputs.
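A small Python sketch of the 2-to-4 decoder (illustrative only) makes the one-hot behaviour explicit:

def decoder_2to4(a, b):
    """2-to-4 line decoder: exactly one of D0..D3 is 1 for each input combination."""
    outputs = [0, 0, 0, 0]
    outputs[(a << 1) | b] = 1
    return outputs              # [D0, D1, D2, D3]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", decoder_2to4(a, b))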
Block diagram

Truth Table
Logic Circuit

Encoder
An encoder is a combinational circuit which is designed to perform the inverse operation of a
decoder. An encoder has n input lines and m output lines. An encoder produces an m-bit binary
code corresponding to the digital input number. The encoder accepts an n-bit input digital word and
converts it into an m-bit digital word.
Block diagram

Examples of Encoders are following.

 Priority encoders
 Decimal to BCD encoder
 Octal to binary encoder
 Hexadecimal to binary encoder
Priority Encoder
This is a special type of encoder. Priority is given to the input lines. If two or more input lines are 1
at the same time, then the input line with the highest priority will be considered. There are four inputs
D0, D1, D2, D3 and two outputs Y0, Y1. Of the four inputs, D3 has the highest priority and D0 has the
lowest priority. That means if D3 = 1 then Y1 Y0 = 11 irrespective of the other inputs. Similarly, if
D3 = 0 and D2 = 1 then Y1 Y0 = 10 irrespective of the other inputs.
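The priority rule can be sketched in Python as a simple chain of checks from the highest-priority input downwards (an illustrative model, not a gate-level description):

def priority_encoder_4to2(d3, d2, d1, d0):
    """4-to-2 priority encoder: the highest-numbered active input wins."""
    if d3:
        return (1, 1)   # Y1 Y0 = 11
    if d2:
        return (1, 0)   # Y1 Y0 = 10
    if d1:
        return (0, 1)   # Y1 Y0 = 01
    return (0, 0)       # only D0 active (or no input active)

print(priority_encoder_4to2(1, 0, 1, 1))   # D3 wins -> (1, 1)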
Block diagram
Truth Table

Logic Circuit

Sequential Circuits
In our previous sections, we learned about combinational circuits and their working. Combinational
circuits have a set of outputs which depends only on the present combination of inputs. Below is the
block diagram of the synchronous logic circuit.

The sequential circuit is a special type of circuit that has a series of inputs and outputs. The outputs
of a sequential circuit depend on both the combination of present inputs and the previous outputs.
The previous output is treated as the present state. So, the sequential circuit contains a combinational
circuit along with memory storage elements. A sequential circuit doesn't always need to contain a
combinational circuit; it can contain only the memory element.
The differences between combinational circuits and sequential circuits are given below:
1) The outputs of a combinational circuit depend only on the present inputs. The outputs of a
sequential circuit depend on both the present inputs and the present state (previous output).
2) The feedback path is not present in a combinational circuit. The feedback path is present in
sequential circuits.
3) In combinational circuits, memory elements are not required. In sequential circuits, memory
elements play an important role and are required.
4) The clock signal is not required for combinational circuits. The clock signal is required for
sequential circuits.
5) The combinational circuit is simple to design. It is not simple to design a sequential circuit.
Types of Sequential Circuits
Asynchronous sequential circuits
The clock signal is not used by asynchronous sequential circuits; the asynchronous circuit
is operated through the pulses of its inputs. So, changes in the inputs can change the state of the
circuit, and the internal state changes when an input variable changes. Un-clocked flip-flops or
time-delay elements are the memory elements of asynchronous sequential circuits. The asynchronous
sequential circuit is similar to a combinational circuit with feedback.
Synchronous sequential circuits
In synchronous sequential circuits, synchronization of the memory element's state is done by the
clock signal. The output is stored in either flip-flops or latches(memory devices). The
synchronization of the outputs is done with either only negative edges of the clock signal or only
positive edges.
Clock Signal and Triggering
Clock signal
A clock signal is a periodic signal in which ON time and OFF time need not be the same. When ON
time and OFF time of the clock signal are the same, a square wave is used to represent the clock
signal. Below is a diagram which represents the clock signal:

A clock signal is often treated as a square wave: the signal stays at each logic level, either high
(5 V) or low (0 V), for an equal amount of time, and it repeats with a certain time period, which is
then equal to twice the 'ON time' (or 'OFF time').
Types of Triggering
There are two types of triggering in sequential circuits:
Level triggering
The logic High and logic Low are the two levels in the clock signal. In level triggering, when the
clock pulse is at a particular level, only then the circuit is activated. There are the following types of
level triggering:
Positive level triggering
In a positive level triggering, the signal with Logic High occurs. So, in this triggering, the circuit is
operated with such type of clock signal. Below is the diagram of positive level triggering:
Negative level triggering
In negative level triggering, the signal with Logic Low occurs. So, in this triggering, the circuit is
operated with such type of clock signal. Below is the diagram of Negative level triggering:

Edge triggering
In clock signal of edge triggering, two types of transitions occur, i.e., transition either from Logic
Low to Logic High or Logic High to Logic Low.
Based on the transitions of the clock signal, there are the following types of edge triggering:
Positive edge triggering
The transition from Logic Low to Logic High occurs in the clock signal of positive edge triggering.
So, in positive edge triggering, the circuit is operated with such type of clock signal. The diagram
of positive edge triggering is given below.

Negative edge triggering


The transition from Logic High to Logic low occurs in the clock signal of negative edge triggering.
So, in negative edge triggering, the circuit is operated with such type of clock signal. The diagram
of negative edge triggering is given below.

Latches in Digital Logic


Latches are digital circuits that store a single bit of information and hold its value until it is updated
by new input signals. They are used in digital systems as temporary storage elements to store binary
information. Latches can be implemented using various digital logic gates, such as AND, OR,
NOT, NAND, and NOR gates.
There are two types of latches:
1. S-R (Set-Reset) Latches: S-R latches are the simplest form of latches and are implemented
using two inputs: S (Set) and R (Reset). The S input sets the output to 1, while the R input resets
the output to 0. When both S and R are at 1, the latch is said to be in an “undefined” state.
2. D (Data) Latches: D latches are also known as transparent latches and are implemented using
two inputs: D (Data) and a clock signal. The output of the latch follows the input at the D
terminal as long as the clock signal is high. When the clock signal goes low, the output of the
latch is stored and held until the next rising edge of the clock.
Latches are widely used in digital systems for various applications, including data storage,
control circuits, and flip-flop circuits. They are often used in combination with other digital
circuits to implement sequential circuits, such as state machines and memory elements.
In summary, latches are digital circuits that store a single bit of information and hold its value
until it is updated by new input signals. There are two types of latches, S-R (Set-Reset) latches
and D (Data) latches, and they are widely used in digital systems for various applications.
Latches are basic storage elements that operate with signal levels (rather than signal transitions).
Latches controlled by a clock transition are flip-flops. Latches are level-sensitive devices. Latches
are useful for the design of asynchronous sequential circuits. Latches are sequential circuits with
two stable states. They are sensitive to the applied input voltages and do not depend on a clock
pulse. Flip-flops that do not use a clock pulse are referred to as latches.
SR (Set-Reset) Latch – The set and reset states are also known as the preset and clear states. The SR
latch forms the basic building block of all other types of flip-flops.
SR Latch is a circuit with:
(i) 2 cross-coupled NOR gate or 2 cross-coupled NAND gate.
(ii) 2 input S for SET and R for RESET.
(iii) 2 output Q, Q’.

Q   Q'   STATE
1   0    Set
0   1    Reset
Under normal conditions, both the inputs remain 0. The following is the RS latch built with NAND
gates:

Case-1: S’=R’=1 (S=R=0) –
If Q = 1, the Q and R’ inputs of the 2nd NAND gate are both 1, so Q’ = 0.
If Q = 0, the Q and R’ inputs of the 2nd NAND gate are 0 and 1 respectively, so Q’ = 1.
In both cases the outputs are unchanged, i.e. the latch holds its previous state.

Case-2: S’=0, R’=1 (S=1, R=0) –


As S’=0, the output of 1st NAND gate, Q = 1(SET state). In 2nd NAND gate, as Q and R’ inputs
are 1, Q’=0.
Case-3: S’= 1, R’= 0 (S=0, R=1) –
As R’=0, the output of 2nd NAND gate, Q’ = 1. In 1st NAND gate, as Q and S’ inputs are 1,
Q=0(RESET state).

Case-4: S’= R’= 0 (S=R=1) –


When S = R = 1, both Q and Q’ become 1, which is not allowed. So, this input condition is
prohibited.
The SR Latch using NOR gate is shown below:

Gated SR Latch –
A gated SR latch is an SR latch with an enable input; it operates when enable is 1 and retains the
previous state when enable is 0.

Gated D Latch –
The D latch is similar to the SR latch with some modifications: here, the two inputs are complements of
each other. The letter D in the D latch stands for “data”, as this latch stores a single bit temporarily.
The design of the D latch with an Enable signal is given below:
The truth table for the D-Latch is shown below:

Enable  D  Q(n)  Q(n+1)  STATE
1       0  x     0       RESET
1       1  x     1       SET
0       x  x     Q(n)    No Change
As the output is the same as the input D, the D latch is also called a transparent latch. Considering the
truth table, the characteristic equation for the D latch with enable input can be given as:

Q(n+1) = EN.D + EN'.Q(n)
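This characteristic equation is easy to animate in Python (a small sketch for illustration): while EN = 1 the output follows D, and while EN = 0 the previous value is held.

def d_latch(enable, d, q_prev):
    """Gated D latch characteristic equation: Q(n+1) = EN.D + EN'.Q(n)."""
    return (enable & d) | ((1 - enable) & q_prev)

q = 0
for enable, d in [(1, 1), (1, 0), (0, 1), (0, 0), (1, 1)]:
    q = d_latch(enable, d, q)
    print(f"EN={enable} D={d} -> Q={q}")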

Advantages of Latches:
1. Easy to Implement: Latches are simple digital circuits that can be easily implemented using
basic digital logic gates.
2. Low Power Consumption: Latches consume less power compared to other sequential circuits
such as flip-flops.
3. High Speed: Latches can operate at high speeds, making them suitable for use in high-speed
digital systems.
4. Low Cost: Latches are inexpensive to manufacture and can be used in low-cost digital systems.
5. Versatility: Latches can be used for various applications, such as data storage, control circuits,
and flip-flop circuits.
Disadvantages of Latches:
1. No Clock: Latches do not have a clock signal to synchronize their operations, making their
behavior unpredictable.
2. Unstable State: Latches can sometimes enter into an unstable state when both inputs are at 1.
This can result in unexpected behavior in the digital system.
3. Complex Timing: The timing of latches can be complex and difficult to specify, making them
less suitable for real-time control applications.

Digital Circuits - Flip-Flops


Latches are the basic building blocks of flip-flops. We can implement flip-flops in two methods.
In the first method, we cascade two latches in such a way that the first latch is enabled for every positive
clock pulse and the second latch is enabled for every negative clock pulse, so that the combination of
the two latches becomes a flip-flop.
In the second method, we can directly implement the flip-flop, which is edge sensitive. In this chapter,
let us discuss the following flip-flops using the second method.
 SR Flip-Flop
 D Flip-Flop
 JK Flip-Flop
 T Flip-Flop
SR Flip-Flop
An SR flip-flop operates with only positive clock transitions or negative clock transitions, whereas an SR
latch operates with an enable signal. The circuit diagram of the SR flip-flop is shown in the following
figure.
This circuit has two inputs S & R and two outputs Q(t) & Q(t)'. The operation of the SR flip-flop is
similar to that of the SR latch, but this flip-flop affects the outputs only when a positive transition of the
clock signal is applied, instead of an active enable.
The following table shows the state table of SR flip-flop.
S R Q(t+1)
0 0 Q(t)
0 1 0
1 0 1
1 1 -
Here, Q(t) & Q(t+1) are the present state & next state respectively. So, the SR flip-flop can be used for
one of three functions, Hold, Reset & Set, based on the input conditions, when a positive
transition of the clock signal is applied. The following table shows the characteristic table of the SR flip-
flop.
Present Inputs Present State Next State
S R Q(t) Q(t+1)
0 0 0 0
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 x
1 1 1 x
By using a three-variable K-Map, we can get the simplified expression for the next state, Q(t+1).
The three-variable K-Map for the next state, Q(t+1), is shown in the following figure.

The maximum possible groupings of adjacent ones are already shown in the figure. Therefore,
the simplified expression for the next state Q(t+1) is
Q(t+1) = S + R'Q(t)
D Flip-Flop
A D flip-flop operates with only positive clock transitions or negative clock transitions, whereas a D
latch operates with an enable signal. That means the output of the D flip-flop is insensitive to changes
in the input D except for an active transition of the clock signal. The circuit diagram of the D flip-flop is
shown in the following figure.
This circuit has a single input D and two outputs Q(t) & Q(t)'. The operation of the D flip-flop is
similar to that of the D latch, but this flip-flop affects the outputs only when a positive transition of the
clock signal is applied, instead of an active enable.
The following table shows the state table of D flip-flop.
D Q(t+1)
0 0
1 1
Therefore, the D flip-flop always holds the information available on the data input D at the most recent
positive transition of the clock signal. From the above state table, we can directly write the next state
equation as
Q(t+1) = D
The next state of the D flip-flop is always equal to the data input D for every positive transition of the
clock signal. Hence, D flip-flops can be used in registers, shift registers and some counters.
JK Flip-Flop
The JK flip-flop is the modified version of the SR flip-flop. It operates with only positive clock transitions
or negative clock transitions. The circuit diagram of the JK flip-flop is shown in the following figure.

This circuit has two inputs J & K and two outputs Q(t) & Q(t)'. The operation of the JK flip-flop is
similar to that of the SR flip-flop. Here, we consider the inputs of the SR flip-flop as S = JQ(t)' and R =
KQ(t) in order to utilize the modified SR flip-flop for all 4 combinations of inputs.
The following table shows the state table of JK flip-flop.
J K Q(t+1)
0 0 Q(t)
0 1 0
1 0 1
1 1 Q(t)'
Here, Q(t) & Q(t+1) are the present state & next state respectively. So, the JK flip-flop can be used for
one of four functions, Hold, Reset, Set & Complement of the present state, based on the
input conditions, when a positive transition of the clock signal is applied. The following table shows
the characteristic table of the JK flip-flop.
Present Inputs Present State Next State
J K Q(t) Q(t+1)
0 0 0 0
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 0
By using a three-variable K-Map, we can get the simplified expression for the next state,
Q(t+1). The three-variable K-Map for the next state, Q(t+1), is shown in the following figure.

The maximum possible groupings of adjacent ones are already shown in the figure. Therefore,
the simplified expression for the next state Q(t+1) is
Q(t+1) = JQ(t)' + K'Q(t)
T Flip-Flop
The T flip-flop is the simplified version of the JK flip-flop. It is obtained by connecting the same input ‘T’
to both inputs of the JK flip-flop. It operates with only positive clock transitions or negative clock
transitions. The circuit diagram of the T flip-flop is shown in the following figure.

This circuit has a single input T and two outputs Q(t) & Q(t)'. The operation of the T flip-flop is the same
as that of the JK flip-flop. Here, we consider the inputs of the JK flip-flop as J = T and K = T in order to
utilize the modified JK flip-flop for 2 combinations of inputs. We thereby eliminate the other two
combinations of J & K, for which the two values are complements of each other, in the T flip-flop.
The following table shows the state table of T flip-flop.
T Q(t+1)
0 Q(t)
1 Q(t)'
Here, Q(t) & Q(t+1) are the present state & next state respectively. So, the T flip-flop can be used for
one of two functions, Hold & Complement of the present state, based on the input
conditions, when a positive transition of the clock signal is applied. The following table shows
the characteristic table of the T flip-flop.
Inputs Present State Next State
T Q(t) Q(t+1)
0 0 0
0 1 1
1 0 1
1 1 0
From the above characteristic table, we can directly write the next state equation as
Q(t+1) = T'Q(t) + TQ(t)'
⇒ Q(t+1) = T ⊕ Q(t)

The output of the T flip-flop toggles for every positive transition of the clock signal when the input
T remains at logic High (1). Hence, the T flip-flop can be used in counters.
In this chapter, we implemented various flip-flops by providing the cross coupling between NOR
gates. Similarly, you can implement these flip-flops by using NAND gates.
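The four characteristic equations derived above can be collected in a short Python sketch (illustrative only), which also demonstrates the toggling behaviour of the T flip-flop:

def sr_next(s, r, q):   # Q(t+1) = S + R'Q(t)   (S = R = 1 is not allowed)
    return s | ((1 - r) & q)

def d_next(d, q):       # Q(t+1) = D
    return d

def jk_next(j, k, q):   # Q(t+1) = JQ(t)' + K'Q(t)
    return (j & (1 - q)) | ((1 - k) & q)

def t_next(t, q):       # Q(t+1) = T xor Q(t)
    return t ^ q

# A T flip-flop driven with T = 1 toggles on every clock transition
q = 0
for edge in range(4):
    q = t_next(1, q)
    print("after clock edge", edge + 1, ": Q =", q)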

Digital Registers
A flip-flop is a 1-bit memory cell which can be used for storing digital data. To increase the
storage capacity in terms of number of bits, we have to use a group of flip-flops. Such a group of
flip-flops is known as a register. An n-bit register consists of n flip-flops and is
capable of storing an n-bit word.
The binary data in a register can be moved within the register from one flip-flop to another. The
registers that allow such data transfers are called shift registers. There are four modes of
operation of a shift register, listed below (a small simulation sketch follows the list).
 Serial Input Serial Output
 Serial Input Parallel Output
 Parallel Input Serial Output
 Parallel Input Parallel Output
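As a simple illustration of the serial-in serial-out mode (a Python sketch with assumed names, not a description of a specific device), each clock pulse moves every stored bit one flip-flop further along the register:

def siso_shift(register, serial_in):
    """One clock pulse of a serial-in serial-out shift register.

    `register` lists the flip-flop outputs, index 0 being the input stage;
    the bit shifted out of the last stage is returned as the serial output."""
    serial_out = register[-1]
    # On the clock edge each flip-flop takes the value of the previous stage
    return [serial_in] + register[:-1], serial_out

reg = [0, 0, 0, 0]
for bit in [1, 0, 1, 1]:                  # shift in the stream 1, 0, 1, 1
    reg, out = siso_shift(reg, bit)
    print("register:", reg, "serial out:", out)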
Bidirectional Shift Register
 If a binary number is shifted left by one position then it is equivalent to multiplying the
original number by 2. Similarly if a binary number is shifted right by one position then it is
equivalent to dividing the original number by 2.
 Hence if we want to use the shift register to multiply and divide the given binary number,
then we should be able to move the data in either left or right direction.
 Such a register is called a bi-directional shift register. A four-bit bi-directional shift register is
shown in the figure.
 There are two serial inputs, namely the serial right-shift data input DR and the serial left-shift
data input DL, along with a mode select input (M).
Block Diagram

Operation
S.N. Condition Operation
1 With M = 1 − Shift right operation If M = 1, then the AND gates 1, 3, 5 and 7 are
enabled whereas the remaining AND gates 2, 4,
6 and 8 will be disabled.
The data at DR is shifted to right bit by bit from
FF-3 to FF-0 on the application of clock pulses.
Thus with M = 1 we get the serial right shift
operation.
2 With M = 0 − Shift left operation When the mode control M is connected to 0,
the AND gates 2, 4, 6 and 8 are enabled
while 1, 3, 5 and 7 are disabled.
The data at DL is shifted left bit by bit from FF-
0 to FF-3 on the application of clock pulses.
Thus with M = 0 we get the serial left shift
operation.
Universal Shift Register
A shift register which can shift the data in only one direction is called a uni-directional shift
register. A shift register which can shift the data in both directions is called a bi-directional shift
register. Applying the same logic, a shift register which can shift the data in both directions as well
as load it in parallel is known as a universal shift register. The shift register is capable of performing
the following operations −
 Parallel loading
 Left Shifting
 Right shifting
The mode control input is connected to logic 1 for parallel loading operation whereas it is
connected to 0 for serial shifting. With mode control pin connected to ground, the universal shift
register acts as a bi-directional register. For serial left operation, the input is applied to the serial
input which goes to AND gate-1 shown in figure. Whereas for the shift right operation, the serial
input is applied to D input.
Block Diagram

Digital Counters
A counter is a sequential circuit. A digital circuit which is used for counting pulses is known
as a counter. Counters are among the widest applications of flip-flops. A counter is a group of
flip-flops with a clock signal applied. Counters are of two types.
 Asynchronous or ripple counters.
 Synchronous counters.
Asynchronous or ripple counters
The logic diagram of a 2-bit ripple up counter is shown in the figure. Toggle (T) flip-flops are
used, but we can also use JK flip-flops with J and K connected permanently to logic 1. The external
clock is applied to the clock input of flip-flop A, and the QA output is applied to the clock input of the
next flip-flop, i.e. FF-B.
Logical Diagram

Operation
S.N Condition Operation
.
1 Initially let both the FFs be in the reset state QBQA = 00 initially
2 After 1st negative clock edge As soon as the first negative clock edge
is applied, FF-A will toggle and QA will
be equal to 1.
QA is connected to clock input of FF-B.
Since QA has changed from 0 to 1, it is
treated as the positive clock edge by FF-
B. There is no change in QB because FF-
B is a negative edge triggered FF.
QBQA = 01 after the first clock pulse.
3 After 2nd negative clock edge On the arrival of second negative clock
edge, FF-A toggles again and QA = 0.
The change in QA acts as a negative
clock edge for FF-B. So it will also
toggle, and QB will be 1.
QBQA = 10 after the second clock pulse.
4 After 3rd negative clock edge On the arrival of 3rd negative clock
edge, FF-A toggles again and
QA become 1 from 0.
Since this is a positive going change,
FF-B does not respond to it and remains
inactive. So QB does not change and
continues to be equal to 1.
QBQA = 11 after the third clock pulse.
5 After 4th negative clock edge On the arrival of the 4th negative clock
edge, FF-A toggles again and
QA becomes 0 from 1.
This negative change in QA acts as clock
pulse for FF-B. Hence it toggles to
change QB from 1 to 0.
QBQA = 00 after the fourth clock pulse.
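The behaviour traced in the table above can be reproduced with a short Python simulation (an illustrative sketch): FF-A toggles on every clock pulse, and FF-B toggles only when QA makes a 1-to-0 (negative-going) change.

def ripple_counter_2bit(pulses):
    """2-bit ripple up counter built from two negative-edge-triggered T flip-flops."""
    qa = qb = 0
    states = []
    for _ in range(pulses):
        old_qa = qa
        qa ^= 1                        # FF-A toggles on every clock pulse
        if old_qa == 1 and qa == 0:    # a falling edge on QA clocks FF-B
            qb ^= 1
        states.append((qb, qa))        # the state is read as QB QA
    return states

print(ripple_counter_2bit(4))   # -> [(0, 1), (1, 0), (1, 1), (0, 0)]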
Truth Table

Classification of counters
Depending on the way in which the counting progresses, the synchronous or asynchronous counters
are classified as follows −
 Up counters
 Down counters
 Up/Down counters
Synchronous Sequential Circuits in Digital Logic
Synchronous sequential circuits are digital circuits that use clock signals to determine the timing of
their operations. They are commonly used in digital systems to implement timers, counters, and
memory elements.
1. In a synchronous sequential circuit, the state of the circuit changes only on the rising or falling
edge of the clock signal, and all changes in the circuit are synchronized to this clock. This
makes the behavior of the circuit predictable and ensures that all elements of the circuit change
at the same time, preventing race conditions and making the circuit easier to design and debug.
2. Synchronous sequential circuits can be implemented using flip-flops, which are circuits that
store binary values and maintain their state even when the inputs change. The output of the flip-
flops is determined by the current inputs and the previous state stored in the flip-flops, and the
next state is determined by the state transition function, which is a Boolean function that
describes the behavior of the circuit.
3. In summary, synchronous sequential circuits are digital circuits that use clock signals to
determine the timing of their operations. They are commonly used in digital systems to
implement timers, counters, and memory elements and are essential components in digital
systems design.
Steps to solve a problem:
1. Draw the state diagram from the problem statement or from the given state
table. Example: Serial Adder. The functioning of serial adder can be depicted by the following
state diagram. X1 and X2 are the inputs; A and B are the states representing a carry of 0 and a carry
of 1 respectively.
2. Draw the state table. If there is any redundant state, then reduce the state table.
3. Select the state assignment, i.e. assign binary numbers to the states according to the total number
of states, and decide the memory element (flip-flops) for the circuit. Here A -> 0 and B -> 1.
4. Replace the assignments in the state table to obtain the transition table.
5. Separate the output table from the transition table.
z = x1'x2'y + x1'x2y' + x1x2y + x1x2'y'
6. The excitation table for the flip-flop is obtained from the transition table using the output of the
flip-flop. Excitation table for the D flip-flop:

D = x1x2 + x1y + x2y
7. Draw the circuit diagram using gates and flip-flops.
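The derived equations (D = x1x2 + x1y + x2y for the next carry and z = x1 ⊕ x2 ⊕ y for the sum, which is exactly the SOP expression above) can be exercised with a short Python sketch of the serial adder (illustrative only; the operands are given LSB first):

def serial_adder(word1, word2):
    """Add two bit-streams, LSB first, one bit per clock cycle."""
    y = 0                                      # state A: stored carry = 0
    out = []
    for x1, x2 in zip(word1, word2):
        z = x1 ^ x2 ^ y                        # output equation z
        y = (x1 & x2) | (x1 & y) | (x2 & y)    # D flip-flop input: next carry
        out.append(z)
    return out, y

# Example: 0111 (7) + 0011 (3) = 1010 (10); streams given LSB first
print(serial_adder([1, 1, 1, 0], [1, 1, 0, 0]))   # -> ([0, 1, 0, 1], 0)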
Advantages of Synchronous Sequential Circuits:
1. Predictable behavior: The use of a clock signal makes the behavior of a synchronous sequential
circuit predictable and deterministic, which is important for real-time control applications.
2. Synchronization: Synchronous sequential circuits ensure that all elements of the circuit change
at the same time, preventing race conditions and making the circuit easier to design and debug.
3. Timing constraints: The timing constraints in a synchronous sequential circuit are well-defined,
making it easier to design and test the circuit.
4. Easy to implement: Synchronous sequential circuits can be implemented using flip-flops, which
are simple and widely available digital components.
Disadvantages of Synchronous Sequential Circuits:
1. Clock skew: Clock skew is a timing error that occurs when the clock signal arrives at different
flip-flops at different times. This can cause errors in the operation of the circuit.
2. Timing jitter: Timing jitter is a variation in the arrival time of the clock signal that can cause
errors in the operation of the circuit.
3. Complex design: The design of synchronous sequential circuits can be complex, especially for
large systems with many state transitions.
4. Power consumption: The use of a clock signal increases the power consumption of a
synchronous sequential circuit compared to asynchronous sequential circuits.

Asynchronous Sequential Circuits

Prerequisite – Introduction of Sequential Circuits


Asynchronous sequential circuits, also known as self-timed or ripple-clock circuits, are digital
circuits that do not use a clock signal to determine the timing of their operations. Instead, the state
of the circuit changes in response to changes in the inputs.
1. In an asynchronous sequential circuit, each flip-flop has a different set of inputs and outputs,
and the state of the circuit is determined by the outputs of the flip-flops. The state transition
function, which is a Boolean function that describes the behaviour of the circuit, determines the
next state of the circuit based on the current inputs and the previous state stored in the flip-flops.
2. Asynchronous sequential circuits are used in digital systems to implement state machines,
which are digital circuits that change their output based on the current state and the inputs. They
are commonly used in applications that require low power consumption or where a clock signal
is not available or practical to use.
3. In summary, asynchronous sequential circuits are digital circuits that do not use a clock signal
to determine the timing of their operations. They are used in digital systems to implement state
machines and are commonly used in applications that require low power consumption or where
a clock signal is not available or practical to use.
Sequential circuits are those which use previous and current input variables by storing their
information and placing them back into the circuit on the next clock (activation) cycle.
There are two types of inputs to the combinational logic. External inputs, which come from outside
the circuit design, are not controlled by the circuit. Internal inputs are functions of a previous
output state.
Asynchronous sequential circuits do not use clock signals as synchronous circuits do. Instead, the
circuit is driven by the pulses of the inputs, which means the state of the circuit changes when the
inputs change. The change of internal state occurs when there is a change in the input variable.
Their memory elements are either un-clocked flip-flops or time-delay elements. They are similar to
combinational circuits with feedback.
Advantages –
 No clock signal, hence no waiting for a clock pulse to begin processing inputs, therefore fast.
Their speed is faster and theoretically limited only by propagation delays of the logic gates.
 Robust handling. Higher performance function units, which provide average-case completion
rather than worst-case completion. Lower power consumption because no transistor
transitions when it is not performing useful computation. The absence of clock drivers reduces
power consumption. Less severe electromagnetic interference (EMI).
 More tolerant to process variations and external voltage fluctuations. Achieve high performance
while gracefully handling variable input and output rates and mismatched pipeline stage delays.
Freedom from difficulties of distributing a high-fan-out, timing-sensitive clock signal. Better
modularity.
 Fewer assumptions about the manufacturing process. Circuit speed adapts to changing
temperature and voltage conditions. Immunity to transistor-to-transistor variability in the
manufacturing process, which is one of the most serious problems faced by the semiconductor
industry.
 Lower power consumption: Asynchronous sequential circuits do not require a clock signal,
which reduces power consumption compared to synchronous sequential circuits.
 More robust: Asynchronous sequential circuits are less sensitive to timing errors, such as clock
skew and jitter, which can cause errors in the operation of synchronous sequential circuits.
 Simpler design: Asynchronous sequential circuits do not require the synchronization logic that
is required in synchronous sequential circuits, making their design simpler.
 More flexible: Asynchronous sequential circuits can be designed to change their state in
response to changes in the inputs, which makes them more flexible and adaptable to changing
conditions.

Disadvantages –
 Some asynchronous circuits may require extra power for certain operations.
 More difficult to design and subject to problems like sensitivity to the relative arrival times of
inputs at gates. If transitions on two inputs arrive at almost the same time, the circuit can go into
the wrong state depending on slight differences in the propagation delays of the gates; this is
known as a race condition.
 The number of circuit elements (transistors) may be double that of synchronous circuits. Fewer
people are trained in this style compared to synchronous design. Asynchronous circuits are
difficult to test and debug, and their output can be uncertain.
 The performance of asynchronous circuits may be reduced in architectures that have a complex
data path. Lack of dedicated, asynchronous design-focused commercial EDA tools.
 Unpredictable behavior: The lack of a clock signal makes the behavior of asynchronous
sequential circuits unpredictable, which can make them harder to design and debug.
 Timing constraints: The timing constraints in asynchronous sequential circuits are more
complex and difficult to specify compared to synchronous sequential circuits.
 Complex design: The design of asynchronous sequential circuits can be complex, especially for
large systems with many state transitions.
 Limited use: Asynchronous sequential circuits are not suitable for real-time control
applications, where a clock signal is required to ensure predictable behavior.
