
2. NUMERATION SYSTEMS AND DIGITAL SYSTEM REPRESENTATION
2.1. NUMERATION SYSTEMS
The need to quantify and express the values of quantities led humans to invent
numeration systems. Throughout history, people have found many ways to express the
values of the quantities they measure. A variety of words and special symbols, called
numerals, have been used to communicate number ideas. How one expresses numbers
using these special symbols makes up a numeration system.

2.2. REPRESENTATION AND ARITHMETIC

The way in which numbers are represented (called a numeration system)
has important implications for how arithmetic works. The main number
systems are of two types:
1. Non-positional number systems
2. Positional number systems

 Non-Positional Number Systems

In early days, human beings counted on fingers. When ten fingers were not adequate,
stones, pebbles, or sticks were used to indicate values. This method of counting uses an
additive approach, called the non-positional number system. In this system, we have symbols
such as I for 1, II for 2, III for 3, IIII for 4, IIIII for 5, and so on. Each symbol represents the
same value regardless of its position in the number, and the symbols are simply added to
find the value of a particular number. Since it is very difficult to perform arithmetic
with such a number system, positional systems were developed.

 Positional Number Systems


In a positional number system, there are only a few symbols called digits, and these
symbols represent different values depending on the position they occupy in the number.
The value of each digit in such a number is determined by three considerations:
1. The digit itself,
2. The position of the digit in the number, and
3. The base of the number system.

The base of a number system or radix defines the range of values that a digit may have.
In the binary system or base 2, there can be only two values for each digit of a number,

either a "0" or a "1". In the octal system or base 8, there can be eight choices for each
digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7".
In the decimal system or base 10, there are ten different values for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9".
In the hexadecimal system or base 16, sixteen numerals are used:
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D", "E", and "F",
where "A" stands for 10, "B" for 11, and so on up to "F" for 15.

2.3. DECIMAL (BASE 10) NUMERATION SYSTEM


The value of a number is the weighted sum of its digits. Consider the decimal number 2357.
It can be expressed as

2357 = 2 x 10^3 + 3 x 10^2 + 5 x 10^1 + 7 x 10^0

Each weight is a power of 10 corresponding to the digit's position. A decimal point
allows negative as well as positive powers of 10 to be used:

526.47 = 5 x 10^2 + 2 x 10^1 + 6 x 10^0 + 4 x 10^-1 + 7 x 10^-2

Here, 10 is called the base or radix of the number system.
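The weighted-sum expansions above can be verified mechanically. The sketch below (an illustrative helper; the function name is my own, not from the text) multiplies each digit by its positional weight:

```python
def weighted_sum(digits, radix=10, frac_digits=0):
    """Evaluate a number as the weighted sum of its digits.

    digits: digit values, most significant first;
    frac_digits: how many of them lie to the right of the radix point.
    """
    value = 0.0
    power = len(digits) - frac_digits - 1  # weight of the leading digit
    for d in digits:
        value += d * radix ** power
        power -= 1
    return value

# 2357 = 2 x 10^3 + 3 x 10^2 + 5 x 10^1 + 7 x 10^0
print(weighted_sum([2, 3, 5, 7]))                    # 2357.0
# 526.47 = 5 x 10^2 + 2 x 10^1 + 6 x 10^0 + 4 x 10^-1 + 7 x 10^-2
print(weighted_sum([5, 2, 6, 4, 7], frac_digits=2))  # ~526.47
```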

2.4. BINARY (BASE 2) AND HEXADECIMAL (BASE 16) NUMERATION


As stated earlier, the binary system uses only two values for each digit of a number, "0"
or "1". With the radix value of 2, the binary number system requires very long strings of
1s and 0s to represent a given number. Some of the problems associated with handling
large strings of binary digits may be eased by grouping them into three digits or four
digits. We can use the following groupings.

 Octal (radix 8): groups of three binary digits

 Hexadecimal (radix 16): groups of four binary digits


Conversion of a binary number to an octal number or a hexadecimal number is very
simple, as it requires only grouping of the binary digits into groups of three (octal) or
four (hexadecimal). Consider the binary number 11011001. It may be converted into
octal or hexadecimal as

(11011001)2 = (011) (011) (001) = (331)8
            = (1101) (1001) = (D9)16
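Python's built-in base conversions confirm the grouping (an illustrative aside, not part of the original text):

```python
n = int("11011001", 2)  # parse the binary string as a base-2 integer
print(oct(n))           # 0o331  -> (331) in base 8
print(hex(n))           # 0xd9   -> (D9) in base 16
```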

2.5. NUMBER SYSTEM CONVERSIONS

In general, conversion between numbers with different radices (bases) cannot be done by
simple substitution. Such conversions involve arithmetic operations.

 Conversion from other bases to radix 10

Let us work out procedures for converting a number in any radix to radix 10, and vice
versa. The decimal equivalent value of a number in any radix is given by the formula

Decimal value = d(p-1) x r^(p-1) + ... + d(1) x r^1 + d(0) x r^0 + d(-1) x r^-1 + ... + d(-n) x r^-n

where r is the radix of the number, d(i) is the digit in position i, and there are p digits to
the left of the radix point and n digits to the right. The decimal value of the number is
determined by converting each digit of the number to its radix-10 equivalent and expanding.

Some examples are:


(331)8 = 3 x 8^2 + 3 x 8^1 + 1 x 8^0 = 192 + 24 + 1 = (217)10
(D9)16 = 13 x 16^1 + 9 x 16^0 = 208 + 9 = (217)10
(33.56)8 = 3 x 8^1 + 3 x 8^0 + 5 x 8^-1 + 6 x 8^-2 = (27.71875)10
(E5.A)16 = 14 x 16^1 + 5 x 16^0 + 10 x 16^-1 = (229.625)10
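These expansions generalize to any radix. The helper below (an illustrative sketch; the function and its name are mine, not from the text) converts a fixed-point numeral string in radix 2 to 16 to its decimal value:

```python
DIGITS = "0123456789ABCDEF"

def to_decimal(number, radix):
    """Convert a fixed-point numeral string in the given radix to a float."""
    if "." in number:
        int_part, frac_part = number.split(".")
    else:
        int_part, frac_part = number, ""
    value = 0.0
    for ch in int_part:            # integer digits carry positive powers of r
        value = value * radix + DIGITS.index(ch)
    weight = 1.0 / radix
    for ch in frac_part:           # fractional digits carry negative powers of r
        value += DIGITS.index(ch) * weight
        weight /= radix
    return value

print(to_decimal("331", 8))    # 217.0
print(to_decimal("D9", 16))    # 217.0
print(to_decimal("33.56", 8))  # 27.71875
print(to_decimal("E5.A", 16))  # 229.625
```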

 Convert from Decimal to Any Base

The easiest way to convert fixed-point numbers to any base is to convert each part
separately. We begin by separating the number into its integer and fractional parts. The
integer part is converted using the remainder method: the number is successively divided
by the base until a zero quotient is obtained. At each division the remainder is kept, and
the new number in base r is obtained by reading the remainders from the last
remainder upwards.

Example 1. Convert the decimal number 3315 to a hexadecimal number.

Solution: Using repeated division by 16:
3315 ÷ 16 = 207, remainder 3
207 ÷ 16 = 12, remainder 15 (F)
12 ÷ 16 = 0, remainder 12 (C)
Reading the remainders from last to first: (3315)10 = (CF3)16.

Example 2. Convert the decimal number 41 into a binary number.

Solution: Using repeated division by 2:
41 ÷ 2 = 20, remainder 1
20 ÷ 2 = 10, remainder 0
10 ÷ 2 = 5, remainder 0
5 ÷ 2 = 2, remainder 1
2 ÷ 2 = 1, remainder 0
1 ÷ 2 = 0, remainder 1
Reading the remainders from last to first: (41)10 = (101001)2.
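The successive-division procedure translates directly into code (an illustrative sketch; names are mine, not from the text):

```python
DIGITS = "0123456789ABCDEF"

def from_decimal(n, radix):
    """Convert a non-negative integer to the given radix using the
    successive-division (remainder) method."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(DIGITS[n % radix])  # keep each remainder
        n //= radix
    return "".join(reversed(digits))      # read remainders last-to-first

print(from_decimal(3315, 16))  # CF3
print(from_decimal(41, 2))     # 101001
```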

2.6. REPRESENTATION OF SIGNED (NEGATIVE AND POSITIVE) NUMBERS


In our traditional arithmetic we use the "+" sign before a number to indicate it as a
positive number and a "-" sign to indicate it as a negative number. We usually omit the
sign before the number if it is positive. This method of representation of numbers is
called "sign-magnitude" representation. But using "+" and "-" signs on a computer is not
convenient, and it becomes necessary to have some other convention to represent the
signed numbers.

2.6.1. Sign-Magnitude representation


In the sign-magnitude representation of binary numbers the first digit is always treated as
the sign bit, and the remaining bits represent the magnitude of the number. Therefore, in
working with the signed binary numbers in sign-magnitude form the leading zeros should
not be ignored. However, the leading zeros can be ignored after the sign bit is separated.
For example,
1000101.11 = -101.11
(the leading 1 is the sign bit; the magnitude 000101.11 equals 101.11 once the leading
zeros are dropped).

Here are a few examples of signed binary numbers and their corresponding 8-bit sign-
magnitude representations: +5 = 00000101, -5 = 10000101, +127 = 01111111, -127 = 11111111.

While the sign-magnitude representation of signed numbers appears to be a natural
extension of traditional arithmetic, arithmetic operations with signed numbers in
this form are not very convenient, either for software implementation on the computer or
for hardware implementation.
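Interpreting a sign-magnitude bit string is straightforward in software (an illustrative helper, not from the text):

```python
def sign_magnitude_value(bits):
    """Interpret a bit string as a sign-magnitude number: the first bit is
    the sign, the remaining bits are the magnitude."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == "1" else magnitude

print(sign_magnitude_value("10000101"))  # -5
print(sign_magnitude_value("00000101"))  # 5
```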
2.6.2. One’s Complement Representation
 The most significant bit (MSB) represents the sign
 If MSB is a ‘0’, the number is positive. The remaining (n-1) bits directly indicate
the magnitude
 If the MSB is a ‘1’, the number is negative; complementing all the remaining (n-1)
bits gives the magnitude

Example: One‘s complement


1111001→(1)(111001)
First (sign) bit is 1: The number is negative
One’s Complement of 111001 = 000110 = (6)10.
Range of n-bit numbers
One’s complement numbers:
0111111 → +63
0000110 → +6
0000000 → +0
1111111 → -0
1111001 → -6
1000000 → -63
‘0’ is represented by both 0000000 and 1111111.
A 7-bit number covers the range from +63 to -63. An n-bit number has a range from
+(2^(n-1) - 1) to -(2^(n-1) - 1).
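The rules above can be sketched in code (an illustrative helper, not from the text):

```python
def ones_complement_value(bits):
    """Interpret a bit string as a one's complement signed number."""
    if bits[0] == "0":                 # sign bit 0: remaining bits are the magnitude
        return int(bits, 2)
    # sign bit 1: complement the remaining bits to get the magnitude
    flipped = "".join("1" if b == "0" else "0" for b in bits[1:])
    return -int(flipped, 2)

print(ones_complement_value("1111001"))  # -6
print(ones_complement_value("0000110"))  # 6
```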

Two’s Complement Representation


If the MSB is a ‘0’:
 The number is positive
 The remaining (n-1) bits directly indicate the magnitude
If the MSB is a ‘1’:
 The number is negative
 The magnitude is obtained by complementing all the remaining (n-1) bits and adding 1

Example: Two’s complement


1111010 →(1)111010
First (sign) bit is 1: The number is negative
Complement 111010 and add 1→ 000101 + 1 = 000110 = (6)10

Range of n-bit numbers

Two’s complement numbers:
0111111 → +63
0000110 → +6
0000000 → +0
1111010 → -6
1000001 → -63
1000000 → -64
‘0’ is represented only by 000...0. A 7-bit number covers the range from +63 to -64.
An n-bit number has a range from +(2^(n-1) - 1) to -(2^(n-1)).
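A compact way to evaluate a two's complement string (using the shortcut, equivalent to complement-and-add-1, of subtracting 2^n when the sign bit is set; an illustrative helper, not from the text):

```python
def twos_complement_value(bits):
    """Interpret a bit string as a two's complement signed number."""
    n = len(bits)
    value = int(bits, 2)
    if bits[0] == "1":     # sign bit set: the weight of the MSB is -2^(n-1),
        value -= 1 << n    # so subtract 2^n from the unsigned value
    return value

print(twos_complement_value("1111010"))  # -6
print(twos_complement_value("1000000"))  # -64
```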
2.7. FLOATING-POINT NUMBER REPRESENTATION
Floating point is a numerical representation system in which a string of digits represents a
real number. The name floating point refers to the fact that the radix point (decimal point,
binary point, etc.) can be placed anywhere relative to the digits within the string. A
floating-point number (or real number) can represent a very large value (e.g. 1.23×10^88) or a
very small one (e.g. 1.23×10^-88).

A floating-point number is typically expressed in scientific notation, with
a fraction (F) and an exponent (E) of a certain radix (r), in the form F×r^E. Decimal
numbers use a radix of 10 (F×10^E), while binary numbers use a radix of 2 (F×2^E).

There are three parts in the floating-point representation:

 The sign bit (S) (0 for positive numbers and 1 for negative numbers).
 For the exponent (E), a so-called bias (or excess) is applied so as to represent both
positive and negative exponents. The bias is set at half of the range. For single
precision with an 8-bit exponent, the bias is 127 (or excess-127). For double
precision with an 11-bit exponent, the bias is 1023 (or excess-1023).
 The fraction (F) (also called the mantissa or significand) is composed of an implicit
leading bit (before the radix point) and the fractional bits (after the radix point). The
leading bit for normalized numbers is 1, while the leading bit for denormalized
numbers is 0.

Representation of floating-point numbers is not unique. For example, the
number 55.66 can be represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so on. The
fractional part can be normalized. In the normalized form, there is only a single non-zero
digit before the radix point. For example, the decimal number 123.4567 can be normalized
as 1.234567×10^2; the binary number 1010.1011B can be normalized as 1.0101011B×2^3.

In the 1960s and 1970s, each computer manufacturer developed its own floating-point
system, leading to a lot of inconsistency in how the same program behaved on
different machines. Through the efforts of many computer scientists, a binary standard was
developed in the early eighties. This standard has become known as the IEEE floating-
point standard. There are two main representation schemes: 32-bit single precision and
64-bit double precision. There is a third IEEE format, IEEE quad precision, which
requires a 128-bit word, numbered from 0 to 127, left to right. The first bit is the
sign bit, S, the next fifteen bits are the exponent bits, 'E', and the final 112 bits are the
fraction 'F'.

2.7.1. IEEE 32-bit single-precision numbers
In 32-bit single-precision floating-point representation:
 The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for
negative numbers.
 The following 8 bits represent the exponent (E).
 The remaining 23 bits represent the fraction (F).

Normalized Form of Floating Point Numbers


Let's illustrate with an example. Suppose that the 32-bit pattern is
1 1000 0001 011 0000 0000 0000 0000 0000, with:
 S = 1
 E = 1000 0001
 F = 011 0000 0000 0000 0000 0000
In the normalized form, the actual fraction is normalized with a leading 1 in the form
of 1.F. In this example, the actual fraction is
1.011 0000 0000 0000 0000 0000 = 1 + 1×2^-2 + 1×2^-3 = (1.375)10.
The sign bit represents the sign of the number, with S=0 for a positive and S=1 for a negative
number. In this example S=1, so this is a negative number, i.e., -1.375.
In normalized form, the actual exponent is E-127 (so-called excess-127 or bias-127). This
is because we need to represent both positive and negative exponents.

With an 8-bit E ranging from 0 to 255, the excess-127 scheme can provide actual
exponents from -127 to 128. In this example, E-127 = 129-127 = 2.
Hence, the number represented is -1.375×2^2 = (-5.5)10.
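This hand decoding can be cross-checked with Python's standard struct module (an illustrative aside, not part of the original text):

```python
import struct

# the 32-bit pattern from the example: 1 10000001 01100000000000000000000
bits = "1" + "10000001" + "01100000000000000000000"
raw = int(bits, 2).to_bytes(4, "big")  # pack the 32 bits into 4 bytes
value = struct.unpack(">f", raw)[0]    # reinterpret as an IEEE 754 single
print(value)  # -5.5
```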

For example, the decimal number 46 can be expressed as an IEEE single-precision floating-
point number as follows:
(46)10 = (101110)2 = 1.01110 × 2^5
The fraction or mantissa F = 0111 0000 0000 0000 0000 000 (23 bits; the leading 1 is implicit).
The exponent E = 0000 0101.

The biased exponent = 00000101 + 01111111 = 10000100.
The number is positive, therefore the sign bit S = 0.
Thus, the number is represented as:
0 10000100 01110000000000000000000
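The encoding of 46 can likewise be confirmed with struct (an illustrative aside, not from the text):

```python
import struct

raw = struct.pack(">f", 46.0)  # encode 46.0 as an IEEE 754 single
bits = format(int.from_bytes(raw, "big"), "032b")
print(bits[0], bits[1:9], bits[9:])  # sign, exponent, fraction fields
# 0 10000100 01110000000000000000000
```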

De-Normalized Form

The normalized form has a serious problem: with an implicit leading 1 for the fraction, it
cannot represent the number zero! The de-normalized form was devised to represent zero and
other very small numbers.
For E=0, the numbers are in the de-normalized form. An implicit leading 0 (instead of 1)
is used for the fraction, and the actual exponent is always -126. Hence, the number zero
can be represented with E=0 and F=0 (because 0.0×2^-126 = 0).
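As a concrete illustration (mine, not from the text), the smallest positive de-normalized single-precision number has E = 0 and a fraction whose only set bit is the last one, giving 2^-23 × 2^-126 = 2^-149:

```python
import struct

# bit pattern 0 00000000 00000000000000000000001 (smallest positive denormal)
value = struct.unpack(">f", (1).to_bytes(4, "big"))[0]
print(value == 2 ** -149)  # True
```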

2.7.2. IEEE 64-bit double-precision numbers


The representation scheme for 64-bit double-precision is similar to the 32-bit single-
precision:
 The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for
negative numbers.
 The following 11 bits represent the exponent (E).
 The remaining 52 bits represent the fraction (F).

The value (N) is calculated as follows:

 Normalized form: For 1 ≤ E ≤ 2046, N = (-1)^S × 1.F × 2^(E-1023).
 Denormalized form: For E = 0, N = (-1)^S × 0.F × 2^(-1022).
 For E = 2047, N represents special values, such as ±INF (infinity) and NaN (not a
number).
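The double-precision layout can be inspected the same way as the single-precision one (an illustrative aside, not from the text):

```python
import struct

raw = struct.pack(">d", 1.0)  # encode 1.0 as an IEEE 754 double
bits = format(int.from_bytes(raw, "big"), "064b")
sign, exponent, fraction = bits[0], bits[1:12], bits[12:]
print(sign)                  # 0
print(int(exponent, 2))      # 1023  (biased: actual exponent 1023-1023 = 0)
print(fraction == "0" * 52)  # True  (1.0 = 1.F x 2^0 with F all zeros)
```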
2.8. BINARY CODES
Encoding is the process of altering the characteristics of information according to an agreed
scheme to make it more suitable for an intended application. By assigning each item of
information a unique combination of 1s and 0s, we transform given information into
binary coded form. We need and use coding of information for a variety of reasons:
 to increase the efficiency of transmission,
 to detect and/or correct errors,
 to simplify information processing,
 for security reasons, to limit the accessibility of information, and
 to standardise universal codes that can be used by all.


Coding schemes have to be designed to suit the security requirements and the complexity
of the medium over which information is transmitted.
Decoding is the process of reconstructing source information from the encoded
information. The decoding process can be more complex than encoding if we do not have prior
knowledge of the coding scheme.
In binary coding we use binary digits or bits (0 and 1) to code the elements of an
information set. Let n be the number of bits in a code word and x be the number of
unique code words.
If n = 1, then x = 2 (0, 1)
If n = 2, then x = 4 (00, 01, 10, 11)
If n = 3, then x = 8 (000, 001, 010, ..., 111)
In general, if n = j, then x = 2^j, where j is the number of bits in a code word.
From this we can conclude that if we are given x elements of information to code into
binary coded format, the number of bits n needed must satisfy:
2^n ≥ x
or n ≥ log2(x) = 3.32 log10(x)


For example, if we want to code alphanumeric information (26 alphabetic characters + 10
decimal digits = 36 elements of information), we require

n ≥ 3.32 log10(36)

n ≥ 5.17 bits

Since a fractional number of bits is not possible, we take n = 6. In other words, a minimum
six-bit code is required to code 36 alphanumeric elements of information. In this
section we consider a few commonly used codes.
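The minimum bit count is easily computed (an illustrative helper, not from the text):

```python
import math

def bits_needed(x):
    """Minimum number of bits n such that 2**n >= x code words exist."""
    return math.ceil(math.log2(x))

print(bits_needed(36))   # 6
print(bits_needed(128))  # 7  (e.g. the ASCII character set)
```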

2.8.1. Binary Coded Decimal (BCD)


There are ten different symbols in the decimal number system: 0, 1, 2, ..., 9. As there are
ten symbols, we require at least four bits to represent them in binary form. Such a
representation of decimal numbers is called binary coded decimal (BCD). Only ten
combinations (out of 16) are utilized to represent the decimal digits. The remaining six
combinations are illegal; however, they may be utilized for error-detection purposes. The
BCD equivalent of a decimal number is written by replacing each decimal digit in the
integer and fractional parts with its four-bit binary equivalent. Consider, for example, the
representation of the decimal number 16.85 in binary coded decimal (BCD):
(16.85)10 = (0001 0110 . 1000 0101)BCD
              1    6   .   8    5
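Digit-by-digit BCD encoding can be sketched as follows (the helper name is mine, not from the text):

```python
def to_bcd(decimal_string):
    """Replace each decimal digit with its 4-bit binary equivalent."""
    return " ".join(
        "." if ch == "." else format(int(ch), "04b")
        for ch in decimal_string
    )

print(to_bcd("16.85"))  # 0001 0110 . 1000 0101
```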

2.8.2. Unit Distance Codes (Gray Code)

There are many applications in which it is desirable to have a code in which the adjacent
codes differ in only one bit. Such codes are called unit distance codes. The Gray code is
the most popular example of a unit distance code. The 3-bit and 4-bit Gray codes are
given in table 2.2.

The most popular use of Gray codes is in the position-sensing transducer known as a
shaft encoder.
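A standard construction of the binary-reflected Gray code, given here as an illustrative aside, XORs each index with itself shifted right by one bit:

```python
def gray_code(n_bits):
    """Return the binary-reflected Gray code sequence for n_bits."""
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

codes = [format(g, "03b") for g in gray_code(3)]
print(codes)  # ['000', '001', '011', '010', '110', '111', '101', '100']

# adjacent codes differ in exactly one bit (the unit-distance property)
for a, b in zip(codes, codes[1:]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```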

2.8.3. Alphanumeric Codes

When the information to be encoded includes entities other than numerical values, an
expanded code is required. For example, alphabetic characters (A, B, ..., Z) and special
operation symbols like +, /, *, (, ) and other special commands are used in digital
systems. Codes that include alphabetic characters are commonly referred to as
alphanumeric codes. However, we require an adequate number of bits to encode all the
characters.

 ASCII code
The American Standard Code for Information Interchange (ASCII) is the most
widely used alphanumeric code. ASCII codes are used to represent alphanumeric data in
computers, communications equipment and other related devices. The code was first
published as a standard in 1967. It is a seven-bit code that represents 128 (2^7) characters
assigned to numbers, letters, punctuation marks, and the most common special characters. It
was proposed by the American National Standards Institute (ANSI). Table 2.3 shows the
full list of ASCII codes, which includes all characters and special symbols represented by
this coding system.

 The Extended Binary Coded Decimal Interchange Code (EBCDIC)

The Extended Binary Coded Decimal Interchange Code (EBCDIC) is an eight-bit code
developed by IBM, giving 256 characters in total. (It should not be confused with the
Extended ASCII character set, which also adds 128 characters to ASCII for 256 in total;
there, the range from 128 through 255 represents additional special, mathematical,
graphic, and foreign characters.)

 Unicode
As briefly mentioned in the earlier sections, encodings such as ASCII, EBCDIC and their
variants do not have a sufficient number of characters to be able to encode alphanumeric
data of all forms, scripts and languages. As a result, these encodings do not permit
multilingual computer processing. In addition, these encodings suffer from
incompatibility: two different encodings may use the same number for two different
characters, or different numbers for the same character. Unicode, developed jointly by
the Unicode Consortium and the International Organization for Standardization (ISO), is
the most complete character encoding scheme, allowing text of all forms and languages
to be encoded for use by computers. Its UTF-32 encoding form uses 32 bits per character;
the Unicode code space itself defines 1,114,112 possible code points.

2.8.4. Seven-segment Display Code


The individual segments making up a 7-segment display are identified by letters as
indicated in figure 2.3.

There are two important types of 7-segment LED display. In a common cathode display,
the cathodes of all the LEDs are joined together and the individual segments are
illuminated by HIGH voltages. In a common anode display, the anodes of all the LEDs
are joined together and the individual segments are illuminated by connecting them to a
LOW voltage.

2.8.5. Error Detection and Correction Codes
When data is transmitted in digital form from one place to another through a transmission
channel/medium, some data bits may be lost or modified. This loss of data integrity
occurs due to a variety of electrical phenomena in the transmission channel.

Ideally, we should have a mechanism for correcting the errors that occur. If this is not
possible or proves too expensive, we would at least need to know whether an error has
occurred. If the occurrence of an error is known, appropriate action, such as retransmitting
the data, can be taken. One method of improving data integrity is to encode the data in a
suitable manner. This encoding may be done for error correction or merely for error detection.

 Parity Check Code


A parity bit is an additional bit added to a group of zeros and ones to make
the parity of the group odd or even. Odd parity exists when the total number of ones,
including the parity bit, is an odd number such as 1, 3, 5, and so on; even parity
exists when the total number of ones, including the parity bit, is an even number such as
2, 4, 6, and so on. The parity bit is usually placed in front of the MSB. This extra bit
allows detection of a single error in the code word in which it is used. The parity bit can
be added on an odd or even basis, and the odd or even designation of a code word is
determined by the actual number of 1s in the data, including the added parity bit.
For example, the letter S in ASCII is
(S) = (1010011)ASCII
When coded for odd parity, S would be shown as
(S) = (11010011)ASCII with odd parity.
In this encoded 'S' the number of 1s is five, which is odd. When S is encoded for even
parity,
(S) = (01010011)ASCII with even parity.
In this case the coded word has an even number (four) of ones. Thus the parity encoding
scheme is a simple one and requires only one extra bit. If the system is using even parity
and we find an odd number of ones in the received data word, we know that an error has
occurred. However, this scheme is meaningful only for single errors. If two bits in a data
word are received incorrectly, the parity scheme will not detect the fault.
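Generating the parity bit takes only a count of the 1s (an illustrative helper, not from the text):

```python
def add_parity(bits, odd=True):
    """Prepend a parity bit so the total number of 1s is odd (or even)."""
    ones = bits.count("1")
    if odd:
        parity = "0" if ones % 2 == 1 else "1"
    else:
        parity = "1" if ones % 2 == 1 else "0"
    return parity + bits

s = format(ord("S"), "07b")     # 7-bit ASCII code for 'S'
print(s)                        # 1010011
print(add_parity(s, odd=True))  # 11010011
print(add_parity(s, odd=False)) # 01010011
```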
 Repetition Code
The repetition code makes use of repetitive transmission of each data bit in the bit stream.
In the case of threefold repetition, ‘1’ and ‘0’ would be transmitted as ‘111’ and ‘000’
respectively. If, in the received data bit stream, bits are examined in groups of three bits,
the occurrence of an error can be detected. In the case of single-bit errors, ‘1’ would be
received as 011 or 101 or 110 instead of 111, and a ‘0’ would be received as 100 or 010
or 001 instead of 000. In both cases, the code becomes self-correcting if the bit in the
majority is taken as the correct bit.
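Threefold repetition with majority-vote decoding can be sketched as follows (illustrative helpers, not from the text):

```python
def encode_repetition(bits):
    """Transmit each data bit three times."""
    return "".join(b * 3 for b in bits)

def decode_repetition(stream):
    """Examine the stream in groups of three and take the majority bit."""
    out = []
    for i in range(0, len(stream), 3):
        group = stream[i:i + 3]
        out.append("1" if group.count("1") >= 2 else "0")
    return "".join(out)

sent = encode_repetition("10")       # "111000"
corrupted = "101000"                 # one bit flipped in the first group
print(decode_repetition(corrupted))  # 10  (the single-bit error is corrected)
```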

 Cyclic Redundancy Check Code


Cyclic redundancy checking is a method of checking for errors in data that has been
transmitted over a communications link. A CRC-enabled device calculates a short, fixed-
length binary sequence for each block of data and sends or stores the two together.
When a block is read or received, the receiving end applies the same calculation to the
data and compares its result with the result appended by the sender. If they agree, the data
has been received successfully; if not, the sender can be notified to resend the block of
data.
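The calculate, append, and compare flow can be illustrated with the CRC-32 function from Python's zlib module (an illustrative sketch; real links use various CRC polynomials):

```python
import zlib

block = b"some payload bytes"
crc = zlib.crc32(block)          # sender computes and appends the CRC

# receiver recomputes over the received data and compares
print(zlib.crc32(block) == crc)  # True  -> block accepted

corrupted = b"some paylaod bytes"    # two characters swapped in transit
print(zlib.crc32(corrupted) == crc)  # False -> request retransmission
```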
