COA IAT-2 QB SOLN


Module 3
Q1. Draw & explain six stage CPU instruction pipeline.
Ans-
1. A typical instruction cycle can be split into several sub-cycles such as fetch instruction,
decode instruction, execute, and store. The instruction cycle and the corresponding
sub-cycles are performed for each instruction. These sub-cycles of different
instructions can thus be interleaved; in other words, the sub-cycles of many
instructions can be carried out simultaneously, reducing the overall execution
time. This is called instruction pipelining.
2. The more stages in the pipeline, the higher the throughput of the CPU.
3. If the instruction processing is split into six phases, the pipelined CPU will have six
different stages for the execution of the sub phases.
4. The six stages are as follows:

• Fetch instruction (FI):


• Decode instruction (DI):
• Calculate operand (CO):
• Fetch operands (FO):
• Execute Instruction (EI):
• Write operand (WO):
Fetch instruction: Instructions are fetched from memory into a temporary buffer
before they get executed.
Decode instruction: The instruction is decoded by the CPU so that the necessary
opcodes and operands can be determined.
Calculate operand: Based on the addressing scheme used, either the operands are
directly provided in the instruction or the effective address has to be calculated.
Fetch operands: Once the address is calculated, the operands need to be fetched from
the calculated address. This is done in this phase.
Execute instruction: The instruction can now be executed.
Write operand: Once the instruction is executed, the result from the execution needs
to be stored or written back to memory.

5. The timing diagram of a six-stage instruction pipeline is shown in Figure 6:

6. Assume that the sub-cycles of the instruction cycle take exactly the same time to
complete, i.e. one clock cycle each in this case.
7. If the time required by each sub-phase is not the same, appropriate delays need to
be introduced. From this timing diagram it is clear that the total execution time of 3
instructions in this 6-stage pipeline is 8 time units. The first instruction completes after
6 time units, and thereafter one instruction completes in each time unit. Without the pipeline, the
total time required to complete 3 instructions would have been 18 (6 × 3) time units. Therefore
pipeline processing gives a speed-up, and the speed-up is related to the number of stages.
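The timing arithmetic above can be checked with a short sketch (Python, purely illustrative; the function names are made up):

```python
def pipeline_time(n_instructions, n_stages):
    # The first instruction takes n_stages cycles to drain through the
    # pipeline; every later instruction completes one cycle after it.
    return n_stages + (n_instructions - 1)

def sequential_time(n_instructions, n_stages):
    # Without pipelining, every instruction occupies all stages in turn.
    return n_instructions * n_stages

print(pipeline_time(3, 6))    # 8 time units, as in the timing diagram
print(sequential_time(3, 6))  # 18 time units
```

For a long instruction stream the speed-up approaches the number of stages, since the ratio of the two times tends to n_stages as n_instructions grows.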

Q2. Describe various pipelining hazards.


Ans-
1. Pipelining can efficiently increase the performance of a processor by overlapping
the execution of instructions.
2. But the efficiency of pipelining depends upon how the problems encountered
during the implementation of pipelining are handled.
3. These problems are known as HAZARDS.

➢ Types of Hazards:
a. Structural Hazards (Resource Bound)
b. Control Hazards (Pipelining Bubbles)
c. Data Hazards (Data dependencies)
a. Structural Hazards (Resource Bound)
1. During the pipelining, the overlapped execution of instructions requires pipelining of
functional units and duplication of resources to allow all possible combinations of
instructions in the pipeline.
2. If some combination of instructions cannot be accommodated because of a resource
conflict, the machine is said to have a structural hazard.
Common instances of structural hazards arise when:
1) Some functional unit is not fully pipelined; a sequence of instructions using that un-pipelined
unit then cannot proceed at the rate of one per clock cycle.
2) Some resource has not been duplicated enough to allow all combinations of instructions in
the pipeline to execute. This type of hazard occurs when two activities require the same
resource simultaneously.
For example, if a machine has a shared single-memory pipeline for data and instructions, an
instruction containing a data-memory reference (load - MEM) will conflict with the
instruction fetch of a later instruction (instr 3 - IF):

b. Control Hazards (Pipelining Bubbles)


1. This type of hazard is caused by uncertainty about the execution path, i.e. whether a branch is taken or not
taken.
2. It is a hazard that arises when an attempt is made to make a decision before the condition
is evaluated.
3. It results when we branch to a new location in the program, invalidating everything we
have loaded into our pipeline.
4. Control hazards can cause a greater performance loss for a pipeline than data hazards.
5. When a branch is executed, it may or may not change the PC (program counter) to
something other than its current value plus 4.
6. If a branch changes the PC to its target address, it is a taken branch; if it falls through,
it is not taken.
7. If instruction i is a taken branch, then the PC is normally not changed until the end of
the MEM stage, after the completion of the address calculation and comparison.

c. Data Hazards (Data dependencies)


1. Data hazards occur when the pipeline changes the order of read/write accesses to
operands so that the order differs from the order seen by sequentially executing
instructions on the unpipelined machine.
2. Data hazards are also known as data dependency. Data dependency is the
condition in which the outcome of the current operation is dependent on the
outcome of a previous instruction that has not yet been executed to completion
because of the effect of the pipeline.
3. Data hazards arise because of the need to preserve the order of the execution of
instructions.
Data hazards are classified into three types:
1. RAW- Read After Write (also known as True Data Dependency)
2. WAW-Write After Write (also known as Output Dependency)
3. WAR –Write After Read (also known as Anti Data Dependency)

1. RAW- Read After Write (also known as True Data Dependency)


➢ The RAW hazard is the most common hazard; it occurs when a read operation takes place
after a write. The following example shows a possible RAW hazard:
o Add t1, A, B
o Sub t2, t1, C
➢ Solution: internal forwarding (this can be used for all types of data hazards)
➢ (Instruction j tries to read a source before instruction i writes to it, so j incorrectly gets the old value.)

2. WAW-Write After Write (also known as Output Dependency)


➢ WAW hazard is the hazard that occurs when a write operation follows another write
operation.
➢ This hazard is present only in pipelines that write in more than one pipe stage or allow
an instruction to proceed even when a previous instruction is stalled.
➢ The following example shows a possible WAW hazard:
o Add t1, A, B
o Sub t1, C, D
➢ Solution: WAW hazards can be avoided by making the following changes in the
pipeline:
➢ Move the write-back for an ALU operation into the MEM stage, since the data value is
available by then, and assume that the data memory access takes place over two pipeline
stages.

3. WAR –Write After Read (also known as Anti Data Dependency)


➢ A WAR hazard occurs when a write operation follows a read operation.
➢ Instruction j tries to write a destination before it is read by instruction i, so instruction i incorrectly gets
the new value.
➢ Solution: WAR hazards can be avoided by internal forwarding and by modifying the
pipeline architecture so that the conflicting write and read occur a few clock
cycles apart.
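The three data-hazard classes above can be detected mechanically from the registers each instruction reads and writes. The following is a minimal sketch (Python; the tuple format and register names are invented for illustration):

```python
def classify_hazards(i, j):
    """Classify data hazards between instruction i and a later instruction j.

    Each instruction is a (dest, sources) pair; register names are illustrative.
    """
    dest_i, src_i = i
    dest_j, src_j = j
    hazards = []
    if dest_i in src_j:                   # j reads what i writes -> true dependency
        hazards.append("RAW")
    if dest_i == dest_j:                  # both write the same register -> output dependency
        hazards.append("WAW")
    if dest_j in src_i:                   # j writes what i reads -> anti-dependency
        hazards.append("WAR")
    return hazards

# Add t1, A, B  followed by  Sub t2, t1, C  -> RAW on t1
print(classify_hazards(("t1", ["A", "B"]), ("t2", ["t1", "C"])))  # ['RAW']
# Add t1, A, B  followed by  Sub t1, C, D  -> WAW on t1
print(classify_hazards(("t1", ["A", "B"]), ("t1", ["C", "D"])))   # ['WAW']
```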

Q3. What is horizontal & vertical micro instruction?


Ans-
Horizontal micro instruction:
They are instructions which have:
▪ Long formats
▪ The ability to express a high degree of parallelism
▪ Little encoding of the control information
▪ Usefulness when higher operating speed is desired and when the machine structure allows
parallel use of resources
▪ Lengths ranging from 40 to 100 bits

Vertical micro instruction:


They are instructions which have:
▪ Short formats
▪ Limited ability to express parallel micro-operations
▪ Highly encoded schemes that use compact codes to specify only a small amount of
control information
▪ Slower operating speeds
▪ Lengths ranging from 16 to 40 bits
Q4. Explain the Flynn's Classification in detail.
Ans-
• Flynn classified architecture in terms of streams of data and instructions. The essence
of the idea is that computing activity in every machine is based on:
i. Stream of instructions, i.e., sequence of instructions executed by the machine.
ii. Stream of data, i.e., sequence of data including input, temporary or partial results
referred by the instructions.
• Further, a machine may have a single or multiple streams of each kind. Based on
these, computer architectures are characterized by the multiplicity of the hardware that
serves the instruction and data streams as follows:
a. SISD: Single Instruction and single data stream
b. SIMD: Single Instruction and Multiple Data Stream
c. MIMD: Multiple Instruction and Multiple Data Stream
d. MISD: Multiple Instruction and Single Data Stream

a. SISD: Single Instruction and single data stream


1. Single processor
2. Single instruction stream
3. Data stored in single memory
4. Uni-processor
b. SIMD: Single Instruction and Multiple Data Stream
1. Single machine instruction
2. Controls simultaneous execution
3. Number of processing elements
4. Lockstep basis
5. Each processing element has associated data memory
6. Each instruction executed on different set of data by different processors
7. Vector and array processors

c. MIMD: Multiple Instruction and Multiple Data Stream


1. Set of processors
2. Simultaneously execute different instruction sequences
3. Different sets of data
4. SMPs, clusters and NUMA systems
d. MISD: Multiple Instruction and Single Data Stream
1. Sequence of data
2. Transmitted to set of processors
3. Each processor executes different instruction sequence
4. Never been implemented

Q5. Explain the following terms micro-instruction, micro-operation, micro-program.


Ans-
Microinstructions, micro-operations, and micro-programs are terms used in the field of
computer architecture and design to refer to the low-level instructions and operations that a
computer uses to execute instructions.

1. Micro-instruction
A microinstruction is a low-level instruction that a computer's control unit uses to execute
machine instructions. Microinstructions are typically executed very quickly and are used to
perform simple operations such as incrementing a register or fetching data from memory.

2. Micro-operation
A micro-operation, also known as a micro-op, is a low-level operation that a computer's
processor performs as part of executing a single machine instruction. Micro-operations are
performed by the processor's arithmetic and logic units (ALUs) and can include operations
such as addition, subtraction, bitwise operations, and comparisons.

3. Micro-program
A micro-program is a low-level program that defines the sequence of microinstructions that a
computer's control unit uses to execute machine instructions. Micro-programs are stored in
read-only memory (ROM) and are used to control the behavior of the computer's control unit.

Module 4
Q6. Discuss the floating-point representation IEEE standard 754 with examples.
Ans-
The IEEE 754 floating-point standard specifies:
➢ A standard for floating-point storage
➢ 32-bit single and 64-bit double formats
➢ 8-bit and 11-bit exponents respectively
➢ Extended formats (both mantissa and exponent) for intermediate results
Example 1: Represent (178.1875)10 in single and double precision floating-point format.
1. Convert the given number to binary format:
(178.1875)10 = (10110010.0011)2
2. Normalization:
10110010.0011 = 1.01100100011 × 2^7
3. Represent in single precision format:
(1.N) × 2^(E-127)
4. Bias for single precision format = 127
127 + 7 = 134 = (10000110)2
Sign: 0  Exponent: 10000110  Mantissa: 01100100011 (padded with zeros to 23 bits)
5. Represent in double precision format:
(1.N) × 2^(E-1023)
6. Bias for double precision format = 1023
1023 + 7 = 1030 = (10000000110)2
Sign: 0  Exponent: 10000000110  Mantissa: 01100100011 (padded with zeros to 52 bits)
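The single-precision result can be cross-checked in Python using the standard `struct` module, which packs a float in IEEE 754 binary32 format:

```python
import struct

# Pack 178.1875 as IEEE 754 single precision (big-endian) and reread it
# as a 32-bit unsigned integer to inspect the raw bit pattern.
bits = struct.unpack(">I", struct.pack(">f", 178.1875))[0]
s = f"{bits:032b}"
sign, exponent, mantissa = s[0], s[1:9], s[9:]
print(sign, exponent, mantissa)
# 0 10000110 01100100011000000000000
```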

Q7. Examples on Booth’s Multiplication, Restoring & Non-restoring Division Algorithms.


Ans- Refer the Data Representation Algorithm ppt
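As a supplement to the ppt, Booth's multiplication can be sketched in Python (a minimal illustrative implementation; the register names A, Q, Q(-1), and M follow the usual textbook notation, not necessarily the ppt's):

```python
def booth_multiply(m, q, bits=8):
    """Booth's multiplication of two signed integers in `bits`-bit registers."""
    mask = (1 << bits) - 1
    A = 0              # accumulator
    Q = q & mask       # multiplier register
    M = m & mask       # multiplicand register
    q_1 = 0            # the extra bit Q(-1)
    for _ in range(bits):
        q0 = Q & 1
        if q0 == 1 and q_1 == 0:      # bit pair 10: A = A - M
            A = (A - M) & mask
        elif q0 == 0 and q_1 == 1:    # bit pair 01: A = A + M
            A = (A + M) & mask
        # Arithmetic right shift of the combined register A,Q,Q(-1)
        q_1 = q0
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = ((A >> 1) | (A & (1 << (bits - 1)))) & mask  # preserve sign bit
    result = (A << bits) | Q
    # Interpret the 2*bits-wide result as a signed integer
    if result & (1 << (2 * bits - 1)):
        result -= 1 << (2 * bits)
    return result

print(booth_multiply(3, -4))   # -12
print(booth_multiply(-7, 5))   # -35
```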

Q8. Examples on IEEE 754 single and double precision standard of floating-point
representation.
Ans- Refer the Data Representation Algorithm ppt
Module 5
Q9. Classify the types of memories based on the hierarchy of speed and size.
Ans-
Memory Hierarchy, in computer system design, is an enhancement that helps in organising
memory so as to minimise access time. The design of the Memory
Hierarchy is based on a behaviour of programs known as locality of reference. Here is a
figure that demonstrates the various levels of the memory hierarchy clearly:
Memory Hierarchy Design
This Hierarchy Design of Memory is divided into two main types. They are:

a. External or Secondary Memory


It consists of Magnetic Tape, Optical Disk, Magnetic Disk, i.e. it includes peripheral storage
devices that are accessible by the system’s processor via I/O Module.

b. Internal Memory or Primary Memory


It consists of CPU registers, Cache Memory, and Main Memory. It is accessible directly by
the processor.

Q10. Explain the key characteristics of computer memory systems in detail.


Ans-

1. Location of a Memory
• CPU Memory – Cache memory
• Internal Memory – Main / Primary memory
• External Memory – Peripheral storage devices such as disk & tape
2. Capacity (Size)
• Expressed in terms of bytes or words
• For internal memory: 8, 16, 32 bits, etc.
• For external memory: kilobytes, megabytes, etc.
3. Unit of Transfer
• For main memory, the unit of transfer is the number of bits read out of or written into
memory at a time.
• For external memory, data are transferred in much larger units than a word referred as
blocks.

4. Physical Characteristics
a. Semiconductor memory
• Main or primary memory
• In the form of ICs
• RAM, ROM, DRAM, SRAM
b. Magnetic memory
• Secondary memory
• Disks & tapes

5. Performance
• Access time - the time to perform a read or write operation
• Memory cycle time - the access time plus any additional time required before a second
access can start
• Transfer rate - the rate at which data can be transferred into or out of memory

6. Access Methods
• Sequential access (e.g. magnetic tape)
• Direct access (e.g. magnetic disk)
• Random access (e.g. main memory)
• Associative access (e.g. cache)
Q11. Explain cache coherence.
Ans-
1. Two copies of the same data, one in the cache and another in main memory, may become
different. This data inconsistency is called the cache coherence problem.
2. Cache updating schemes eliminate this problem by controlling cache write operations.
Coherency with Multiple Caches
• Bus Watching with write through
o mark a block as invalid when another cache writes back that block, or
o update cache block in parallel with memory write
• Hardware transparency (all caches are updated simultaneously)
• I/O must access main memory through cache or update cache(s)
• Multiple Processors & I/O only access non-cacheable memory blocks

Q12. What is locality of reference? List and define different types of locality.
Ans-
a. Programs contain loops and procedures that repeatedly call each other.
b. Many instructions are executed repeatedly during some time period while the remainder of
the program is accessed infrequently.
c. This is referred to as Locality of Reference.
d. It has two aspects: temporal and spatial.
Temporal Locality - Temporal means that a recently executed instruction is likely to be
executed again very soon.
Spatial Locality - Spatial means that instructions stored nearby the recently executed
instruction are also likely to be executed soon.
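The payoff of spatial locality can be demonstrated with a tiny direct-mapped cache simulation (a sketch; the cache size and block size are arbitrary choices, not from the syllabus):

```python
def hit_rate(addresses, n_lines=16, block_size=4):
    """Hit rate of a direct-mapped cache; parameters are illustrative."""
    cache = [None] * n_lines          # each line stores the block it holds
    hits = 0
    for addr in addresses:
        block = addr // block_size
        line = block % n_lines
        if cache[line] == block:
            hits += 1
        else:
            cache[line] = block       # miss: fill the line with this block
    return hits / len(addresses)

# Sequential access (good spatial locality): one miss per 4-word block.
print(hit_rate(range(256)))                # 0.75
# Large-stride access (poor locality): every reference misses.
print(hit_rate(range(0, 256 * 64, 64)))    # 0.0
```

The sequential pattern misses only on the first word of each block, while the strided pattern touches a new block on every reference and never hits.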

Q13. Discuss cache organization.


Ans- Refer the Memory Organization ppt from slide 27-67 (Anything will be asked)
Module 6
Q14. Discuss the working of:-
(i)DMA
Ans-
o CPU tells the DMA controller:
▪ Read/Write
▪ Device address
▪ Starting address of memory block for data
▪ Amount of data to be transferred
o CPU carries on with other work
o DMA controller deals with the transfer
o DMA controller sends an interrupt when finished

(ii) Programmed I/O


Ans-
• CPU requests I/O operation
• I/O module performs the operation
• I/O module sets status bits
• CPU checks the status bits periodically
• I/O module does not inform the CPU directly
• I/O module does not interrupt the CPU
• CPU may wait or come back later
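The busy-wait behaviour of programmed I/O can be sketched as follows (Python; the MockIOModule class and its methods are invented stand-ins for real device status registers):

```python
class MockIOModule:
    """A stand-in I/O module; a real one would expose hardware registers."""
    def __init__(self, data):
        self._data = data
        self._polls = 0

    def status_ready(self):
        # Pretend the device becomes ready after a few status checks.
        self._polls += 1
        return self._polls >= 3

    def read(self):
        return self._data

def programmed_io_read(io_module):
    # The CPU does nothing else while it polls the status bits.
    while not io_module.status_ready():
        pass                          # busy-wait: wasted CPU cycles
    return io_module.read()

print(programmed_io_read(MockIOModule(b"hello")))  # b'hello'
```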
(iii) Interrupt Driven I/O
Ans-
• CPU issues read command
• I/O module gets data from peripheral while CPU does other work
• I/O module interrupts CPU
• CPU requests data
• I/O module transfers data
(iv) I/O Module
• CPU checks I/O module device status
• I/O module returns status
• If ready, CPU requests data transfer
• I/O module gets data from device
• I/O module transfers data to CPU
• Variations for output, DMA, etc.

Q15. Compare and contrast DMA, Programmed I/O and Interrupt Driven I/O.
Ans-

• DMA
➢ Interrupt-driven and programmed I/O require active CPU intervention
➢ The transfer rate is limited
➢ The CPU is tied up, so DMA is the answer
➢ DMA is an additional module (hardware) on the bus
➢ The DMA controller takes over from the CPU for I/O
