
Chandani Assignment Materials

IEEE 754 is a technical standard for floating-point arithmetic that specifies formats, operations, and interchange formats. It aims to ensure consistency in floating-point behavior across programming environments and computer architectures. The standard defines formats for representing floating-point numbers, including binary floating-point and decimal floating-point. It also defines operations like addition, multiplication, division, and square root. Adhering to the standard helps ensure portability of numeric programs across platforms with different floating-point capabilities.

Uploaded by

Adv Sunil Joshi


Q-1(b)

(iii)

Ans :

setlocale() is a big no in library code, and a plain setlocale(LC_ALL, "") can break important things
even in applications, depending on which country the user is in. The most severe case is LC_NUMERIC,
which essentially breaks all parsing/formatting of floats at random. I'm not sure why you say setting the
locale is important for text. If you do serious internationalized text processing, you'll most likely have to
use special code anyway (using ICU is popular). Many of the POSIX functions can't even be fixed to
handle all the internationalization trickiness because their interface is broken by design. And if you
only want to do simple things like concatenating UTF-8 strings, even a libc which maps the C locale
to plain ASCII will behave correctly.
And setlocale(LC_ALL, "") is really out of the question; I could understand if you'd set a UTF-8 locale.
(Although I'm not sure POSIX requires a UTF-8 locale to exist.)
So my stance is: setting the locale doesn't help you, but it may break things.

(e)

Ans :

BCD stands for binary coded decimal. Suppose we have two 4-bit numbers A and B. The values
of A and B can vary from 0 (0000 in binary) to 9 (1001 in binary) because we are considering
decimal digits.
The output will vary from 0 to 18 if we are not considering the carry from the previous sum.
But if we are considering the carry, then the maximum value of the output will be 19 (i.e. 9 + 9 + 1 =
19).
When we simply add A and B, we get the binary sum. To get the output in BCD
form, we use a BCD adder.

Example 1:
Input :
A = 1001 B = 1000
Output :
Y = 1 0111

Explanation: We are adding A(=9) and B(=8).

The value of the binary sum will be 10001 (=17).
But the BCD sum will be 1 0111,
where 1 is 0001 in binary and 7 is 0111 in binary.

Note – If the sum of the two numbers is less than or equal to 9, then the BCD sum and the
binary sum are the same; otherwise they differ by 6 (0110 in binary).
Now, let's move to the table and find out the logic for when we add “0110”.
We are adding “0110” (=6) only to the second half of the table.
The conditions are:
1. If C’ = 1 (Satisfies 16-19)
2. If S3′.S2′ = 1 (Satisfies 12-15)
3. If S3′.S1′ = 1 (Satisfies 10 and 11)
So, our logic is
C' + S3'.S2' + S3'.S1' = 1
Implementation :
(g)

Ans :

Hamming code is a set of error-correction codes that can be used to detect and correct the
errors that can occur when data is moved or stored from the sender to the receiver. It
is a technique developed by R.W. Hamming for error correction.
Redundant bits –
Redundant bits are extra binary bits that are generated and added to the information-carrying bits
of data transfer to ensure that no bits were lost during the data transfer.
The number of redundant bits can be calculated using the following formula:

2^r ≥ m + r + 1
where, r = redundant bit, m = data bit
Suppose the number of data bits is 7. Then the number of redundant bits satisfies:
2^4 ≥ 7 + 4 + 1
Thus, the number of redundant bits = 4.
Parity bits –
A parity bit is a bit appended to a block of binary data to ensure that the total number of 1’s in the
data is even or odd. Parity bits are used for error detection. There are two types of parity bits:
1. Even parity bit:
In the case of even parity, for a given set of bits, the number of 1’s are counted. If that
count is odd, the parity bit value is set to 1, making the total count of occurrences of 1’s an
even number. If the total number of 1’s in a given set of bits is already even, the parity bit’s
value is 0.
2. Odd Parity bit –
In the case of odd parity, for a given set of bits, the number of 1’s are counted. If that count
is even, the parity bit value is set to 1, making the total count of occurrences of 1’s an odd
number. If the total number of 1’s in a given set of bits is already odd, the parity bit’s value
is 0.
General Algorithm of Hamming code –
The Hamming code is simply the use of extra parity bits to allow the identification of an error.
1. Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
2. All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
3. All the other bit positions are marked as data bits.
4. Each data bit is included in a unique set of parity bits, as determined by its bit position in binary form.
a. Parity bit 1 covers all the bit positions whose binary representation includes a 1 in the least significant position (1, 3, 5, 7, 9, 11, etc).
b. Parity bit 2 covers all the bit positions whose binary representation includes a 1 in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc).
c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in the third position from the least significant bit (4–7, 12–15, 20–23, etc).
d. Parity bit 8 covers all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc).
e. In general, each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
5. Since we check for even parity, set a parity bit to 1 if the total number of ones in the positions it checks is odd.
6. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
Determining the position of redundant bits –
These redundancy bits are placed at the positions which correspond to the power of 2.
As in the above example:
1. The number of data bits = 7
2. The number of redundant bits = 4
3. The total number of bits = 11
4. The redundant bits are placed at the positions corresponding to powers of 2: 1, 2, 4, and 8.
Suppose the data to be transmitted is 1011001. The bits will be placed as follows:

Position: 11 10  9  8  7  6  5  4  3  2  1
Bit:       1  0  1 R8  1  0  0 R4  1 R2 R1

Determining the Parity bits –

1. The R1 bit is calculated using a parity check at all the bit positions whose binary representation
includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11

To find the redundant bit R1, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R1 is an even number the value of R1 (parity bit’s value)
=0
2. R2 bit is calculated using parity check at all the bits positions whose binary representation
includes a 1 in the second position from the least significant bit.
R2: bits 2,3,6,7,10,11
To find the redundant bit R2, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R2 is an odd number the value of R2(parity bit’s
value)=1
3. R4 bit is calculated using parity check at all the bits positions whose binary representation
includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7

To find the redundant bit R4, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R4 is an odd number the value of R4(parity bit’s value) =
1
4. R8 bit is calculated using parity check at all the bits positions whose binary representation
includes a 1 in the fourth position from the least significant bit.
R8: bit 8,9,10,11
To find the redundant bit R8, we check for even parity. Since the total number of 1’s in all
the bit positions corresponding to R8 is an even number the value of R8(parity bit’s
value)=0.

Thus, the data transferred is 1 0 1 0 1 0 0 1 1 1 0 (positions 11 down to 1).

Error detection and correction –

Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission.
Recomputing the four parity checks on the received word then gives R8 R4 R2 R1 = 0 1 1 0.

The bits give the binary number 0110, whose decimal representation is 6. Thus, bit 6
contains an error. To correct the error, the 6th bit is changed from 1 back to 0.

(i)
Ans:
Counters are of two types depending upon the clock pulse applied: the
asynchronous counter and the synchronous counter.
The asynchronous counter, also known as a ripple counter, has its flip-flops triggered
by different clocks, not simultaneously. In a synchronous counter, all flip-flops are
triggered by the same clock simultaneously, which makes it faster in operation than the
asynchronous counter.
Let’s see the difference between these two counters:

1. In a synchronous counter, all flip-flops are triggered with the same clock simultaneously. In an asynchronous counter, different flip-flops are triggered with different clocks, not simultaneously.
2. A synchronous counter is faster than an asynchronous counter in operation.
3. A synchronous counter does not produce any decoding errors. An asynchronous counter produces decoding errors.
4. A synchronous counter is also called a parallel counter. An asynchronous counter is also called a serial counter.
5. Synchronous counter design and implementation are complex due to the increasing number of states. Asynchronous counter design and implementation are very easy.
6. A synchronous counter will operate in any desired count sequence. An asynchronous counter will operate only in a fixed count sequence (UP/DOWN).
7. Synchronous counter examples: ring counter, Johnson counter. Asynchronous counter examples: ripple UP counter, ripple DOWN counter.


(j)
Ans :
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-
point computation which was established in 1985 by the Institute of Electrical and Electronics
Engineers (IEEE). The standard addressed many problems found in the diverse floating point
implementations that made them difficult to use reliably and reduced their portability. IEEE
Standard 754 floating point is the most common representation today for real numbers on
computers, including Intel-based PC’s, Macs, and most Unix platforms.
There are several ways to represent floating point numbers, but IEEE 754 is the most widely
used. IEEE 754 has 3 basic components:
1. The Sign of Mantissa –
This is as simple as the name. 0 represents a positive number while 1 represents a negative
number.
2. The Biased exponent –
The exponent field needs to represent both positive and negative exponents. A bias is added
to the actual exponent in order to get the stored exponent.
3. The Normalised Mantissa –
The mantissa is the part of a number in scientific notation or a floating-point number
consisting of its significant digits. Here we have only 2 digits, i.e. 0 and 1. So a normalised
mantissa is one with only a single 1 to the left of the binary point.
IEEE 754 numbers are divided into two based on the above three components: single
precision and double precision.
TYPES              SIGN         BIASED EXPONENT   NORMALISED MANTISSA   BIAS

Single precision   1 (bit 31)   8 (bits 30-23)    23 (bits 22-0)        127

Double precision   1 (bit 63)   11 (bits 62-52)   52 (bits 51-0)        1023

Example –
85.125
85 = 1010101
0.125 = .001
85.125 = 1010101.001
=1.010101001 x 2^6
sign = 0
1. Single precision:
biased exponent = 127 + 6 = 133
133 = 10000101
Normalised mantissa = 010101001
We will add 0's to complete the 23 bits.

The IEEE 754 Single precision is:


= 0 10000101 01010100100000000000000
This can be written in hexadecimal form 42AA4000

2. Double precision:
biased exponent = 1023 + 6 = 1029
1029 = 10000000101
Normalised mantissa = 010101001
We will add 0's to complete the 52 bits.

The IEEE 754 Double precision is:


= 0 10000000101 0101010010000000000000000000000000000000000000000000
This can be written in hexadecimal form 4055480000000000
Special Values: IEEE has reserved some bit patterns that would otherwise be ambiguous.
 Zero –
Zero is a special value denoted with an exponent and mantissa of 0. -0 and +0 are distinct
values, though they both are equal.
 Denormalised –
If the exponent is all zeros but the mantissa is non-zero, then the value is a denormalised number.
This means this number does not have an assumed leading one before the binary point.
 Infinity –
The values +infinity and -infinity are denoted with an exponent of all ones and a mantissa
of all zeros. The sign bit distinguishes between negative infinity and positive infinity.
Operations with infinite values are well defined in IEEE.
 Not A Number (NAN) –
The value NaN is used to represent a value that is an error. It is represented when the
exponent field is all ones and the mantissa is non-zero. It is a special value that might also
be used to denote a variable that doesn’t yet hold a value.

EXPONENT   MANTISSA   VALUE

0          0          exact 0

255        0          Infinity

0          not 0      denormalised

255        not 0      Not a Number (NaN)

Similar for double precision (just replacing 255 by 2047). Ranges of floating point numbers:

                   DENORMALIZED                          NORMALIZED                           APPROXIMATE DECIMAL

Single Precision   ± 2^-149 to (1 – 2^-23) × 2^-126      ± 2^-126 to (2 – 2^-23) × 2^127      ± approximately 10^-44.85 to approximately 10^38.53

Double Precision   ± 2^-1074 to (1 – 2^-52) × 2^-1022    ± 2^-1022 to (2 – 2^-52) × 2^1023    ± approximately 10^-323.3 to approximately 10^308.3

The range of positive floating point numbers can be split into normalized numbers, and
denormalized numbers which use only a portion of the fraction's precision. Since every
floating-point number has a corresponding negated value, the ranges above are symmetric
around zero.
There are five distinct numerical ranges that single-precision floating-point numbers are not able
to represent with the scheme presented so far:
1. Negative numbers less than –(2 – 2^-23) × 2^127 (negative overflow)
2. Negative numbers greater than –2^-149 (negative underflow)
3. Zero
4. Positive numbers less than 2^-149 (positive underflow)
5. Positive numbers greater than (2 – 2^-23) × 2^127 (positive overflow)
Overflow generally means that values have grown too large to be represented. Underflow is a
less serious problem because it just denotes a loss of precision, which is guaranteed to be closely
approximated by zero.
Table of the total effective range of finite IEEE floating-point numbers is shown below:
         BINARY                    DECIMAL

Single   ± (2 – 2^-23) × 2^127     approximately ± 10^38.53

Double   ± (2 – 2^-52) × 2^1023    approximately ± 10^308.25

Special Operations –
OPERATION                  RESULT

n ÷ ±Infinity              0

±Infinity × ±Infinity      ±Infinity

±nonZero ÷ ±0              ±Infinity

±finite × ±Infinity        ±Infinity

Infinity + Infinity        +Infinity

Infinity – (–Infinity)     +Infinity

–Infinity – Infinity       –Infinity

–Infinity + (–Infinity)    –Infinity

±0 ÷ ±0                    NaN

±Infinity ÷ ±Infinity      NaN

±Infinity × 0              NaN

NaN == NaN                 False

Q-2(B)

Ans :
⇒1 Block of Cache = 2 Words of RAM

⇒Memory location address 25 is equivalent to Block address 12.

⇒ Total number of possible Blocks in Main Memory = 64/2 = 32 blocks.

Associative Mapping:

The block can be anywhere in the cache.

Direct Mapping:

Size of Cache = 8 blocks

Location of Block 12 in Cache = 12 modulo 8 = 4

2 Way set associative mapping:

Number of blocks in a set = 2

Number of sets = Size of Cache in blocks / Number of blocks in a set

=8/2=4

Block 12 can be located anywhere within set (12 modulo 4), that is, set 0.

