
COMPUTER ARCHITECTURE -2MARKS

Unit III

1) What is the need for speculation? (NOV/DEC 2014)

Speculation is a technique implemented in explicitly parallel instruction
computing (EPIC) processors and their compilers to reduce processor-memory
bottlenecks: it allows work to be performed before it is known to be needed
(for example, before a branch outcome is resolved), so that execution is not
held up waiting.
2) What is an exception? (MAY/JUNE 2016, NOV/DEC 2014)
An exception is an unexpected change in control flow.
Consider one form of control hazard:
Add $1,$2,$1;  causing an arithmetic overflow
SW $3,400($1);
Add $5,$1,$2;
3) What is an R-type instruction? (APRIL/MAY 2015)
Consider the operation of the datapath for an R-type instruction such as
add $t1,$t2,$t3. Although everything occurs in one clock cycle, we can think of
four steps to execute the instruction:
The instruction is fetched and the PC is incremented.
Two registers, $t2 and $t3, are read from the register file, and the
main control unit computes the setting of the control lines during
this step as well.
The ALU operates on the data read from the register file, using the
function code to generate the ALU function.
The result from the ALU is written into the register file, using bits
15:11 of the instruction to select the destination register ($t1).
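These four steps can be pictured with a minimal C sketch of a single-cycle
datapath; the array regfile, the variable pc and the register numbers are
illustrative assumptions, not the actual hardware description.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t regfile[32];   /* register file ($0..$31) */
    static uint32_t pc = 0;        /* program counter         */

    /* Execute one R-type add, e.g. add $t1,$t2,$t3 (rd = rs + rt). */
    static void execute_add(int rs, int rt, int rd)
    {
        pc += 4;                          /* step 1: instruction fetched, PC incremented */
        uint32_t a = regfile[rs];         /* step 2: read the two source registers       */
        uint32_t b = regfile[rt];
        uint32_t alu_result = a + b;      /* step 3: ALU operates on the operands        */
        regfile[rd] = alu_result;         /* step 4: write result to destination ($t1)   */
    }

    int main(void)
    {
        regfile[10] = 5;                  /* $t2 */
        regfile[11] = 7;                  /* $t3 */
        execute_add(10, 11, 9);           /* add $t1,$t2,$t3 */
        printf("$t1 = %u\n", regfile[9]); /* prints 12 */
        return 0;
    }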
4) What is a branch prediction buffer? (APRIL/MAY 2015)
A branch prediction buffer is a small memory indexed by the lower
portion of the address of the branch instruction.
The memory contains a bit that says whether the branch was recently taken or not.
1-bit prediction scheme
2-bit prediction scheme
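The 2-bit scheme can be sketched in C as a table of saturating counters
indexed by the low bits of the branch address; the table size and function
names below are assumptions for illustration.

    #include <stdint.h>
    #include <stdbool.h>

    #define BPB_ENTRIES 1024              /* buffer size (assumed)                        */
    static uint8_t bpb[BPB_ENTRIES];      /* 2-bit counters: 0,1 = not taken; 2,3 = taken */

    static bool predict_taken(uint32_t branch_pc)
    {
        return bpb[(branch_pc >> 2) % BPB_ENTRIES] >= 2;
    }

    static void update_predictor(uint32_t branch_pc, bool taken)
    {
        uint8_t *ctr = &bpb[(branch_pc >> 2) % BPB_ENTRIES];
        if (taken  && *ctr < 3) (*ctr)++;   /* saturate at strongly taken     */
        if (!taken && *ctr > 0) (*ctr)--;   /* saturate at strongly not taken */
    }

With 2 bits a prediction must be wrong twice before it flips, which is why the
2-bit scheme behaves better than the 1-bit scheme on loop branches.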
5) What are the major characteristics of MIPS instructions?
Load/store architecture
General-purpose register machine
ALU operations have three register operands
Simple instruction set
Uniform encoding
Designed for pipelining efficiency, including a fixed instruction set encoding
6) What is meant by data hazards in a pipeline?
Data hazards occur when the pipeline must be stalled because one step
must wait for another to complete: an instruction depends on the result of a
prior instruction that is still in the pipeline.
Add $s0,$t0,$t1
Sub $t2,$s0,$t3
Types:
RAW (read after write) hazards
WAW (write after write) hazards
WAR (write after read) hazards
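The Add/Sub pair above is a RAW hazard: Sub reads $s0 just after Add produces
it. A minimal C sketch of the classic forwarding check, with hypothetical
pipeline-register fields:

    #include <stdbool.h>

    /* Hypothetical fields of the EX/MEM pipeline register (previous instruction)
       and the ID/EX pipeline register (current instruction). */
    struct ex_mem { bool reg_write; int rd; };
    struct id_ex  { int rs, rt; };

    /* Forward the ALU result if the previous instruction writes a register
       that the current instruction reads (a RAW hazard), avoiding a stall. */
    static bool forward_from_ex_mem(struct ex_mem prev, struct id_ex cur)
    {
        return prev.reg_write && prev.rd != 0 &&
               (prev.rd == cur.rs || prev.rd == cur.rt);
    }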
7) Define pipeline speedup.
If the stages are perfectly balanced, then the time between instructions on
the pipelined processor, assuming ideal conditions, is
Time between instructions (pipelined) =
    Time between instructions (non-pipelined) / Number of pipeline stages
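A small worked example of the formula; the non-pipelined instruction time and
stage count are assumed values:

    #include <stdio.h>

    int main(void)
    {
        double t_nonpipelined = 800.0;  /* ns between instructions, non-pipelined (assumed) */
        int    stages         = 5;      /* number of pipeline stages                        */

        /* time between instructions (pipelined)
             = time between instructions (non-pipelined) / number of stages */
        double t_pipelined = t_nonpipelined / stages;
        printf("pipelined = %.0f ns, ideal speedup = %.0fx\n",
               t_pipelined, t_nonpipelined / t_pipelined);   /* 160 ns, 5x */
        return 0;
    }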
8) What are the major characteristics of a pipeline?
The major characteristics of a pipeline are:
Pipelining cannot be implemented on a single task; it works by
splitting multiple tasks into a number of subtasks and operating on them
simultaneously.
The speedup or efficiency achieved by using a pipeline depends on the
number of pipe stages and the number of available tasks that can be subdivided.
9) What is a pipeline stall?
Hazards in a pipeline can make the pipeline stall.
Eliminating a hazard often requires that some instructions in
the pipeline be allowed to proceed while others are delayed.
10) Define sign extension.
The 16-bit immediate is added to a 32-bit operand. Since the immediate may be
positive or negative, to add it to a 32-bit operand we must sign-extend it to
32 bits. This function is performed by the sign-extension unit.
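A minimal C sketch of sign-extending the 16-bit immediate field of an
instruction word to 32 bits; the function name is illustrative:

    #include <stdint.h>

    /* Sign-extend bits 15:0 of an instruction word to a 32-bit value. */
    static int32_t sign_extend16(uint32_t instruction)
    {
        int16_t imm = (int16_t)(instruction & 0xFFFFu); /* take the 16-bit immediate */
        return (int32_t)imm;                            /* sign bit is replicated    */
    }

For example, the immediate 0xFFFC (-4) becomes 0xFFFFFFFC.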
11) What is meant by hazards and what are their types? (APRIL/MAY 2017, NOV/DEC 2015)
Hazards reduce the performance from the ideal speedup gained by pipelining.
Structural hazards: the hardware cannot support all possible
combinations of overlapping instructions.
Data hazards: an instruction depends on the result of a
previous instruction.
Control hazards: arise from the pipelining of branches and other
instructions that change the PC.
12) What are the advantages of a pipeline? (MAY/JUNE 2016)
It reduces the execution time of a program.
In a pipeline, multiple instructions are overlapped in execution,
which increases the performance of the computer.
13) What is an edge-triggered clocking methodology?
An edge-triggered clocking methodology means that any values
stored in a sequential logic element are updated only on a clock edge.
14) What is meant by pipeline bubble? (NOV/DEC 2016)
15) What is a data path?(NOV/DEC 2016)
16) Name the control signals required to perform arithmetic operations.
(APRIL/MAY 2017)
17) What is meant by branch prediction? (NOV/DEC 2015)

Unit IV
1) What is Flynn's classification? (NOV/DEC 2014)
Flynn uses the stream concept for describing a machine's structure.
A stream simply means a sequence of items.
Single instruction stream, single data stream (SISD)
Single instruction stream, multiple data streams (SIMD)
Multiple instruction streams, single data stream (MISD)
Multiple instruction streams, multiple data streams (MIMD)
2) Explain multithreading. (NOV/DEC 2016, NOV/DEC 2014)
Multithreading exploits a higher level of parallelism called thread-level
parallelism (TLP), because it is logically structured as separate threads
of execution.
A thread is a separate process with its own instructions and data.

Approaches to multithreading:
a) Fine-grained multithreading
b) Coarse-grained multithreading
3) What is meant by loop unrolling and loop-level parallelism? (NOV/DEC 2015)
Loop-level parallelism:
The simplest and most common way to increase ILP is to exploit
parallelism among iterations of a loop. This type of parallelism is
often called loop-level parallelism.
Example: for(i = 1; i < 1000; i++)
             x[i] = x[i] + y[i];
Loop unrolling:
The loop body is replicated several times so that either the compiler or the
hardware is able to exploit the parallelism inherent in the loop, as sketched below.
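A sketch of the loop above unrolled by a factor of four (assuming the trip
count divides evenly); unrolling reduces loop overhead and exposes independent
statements that the compiler or hardware can overlap:

    /* for (i = 0; i < n; i++) x[i] = x[i] + y[i];  unrolled by 4 */
    void add_arrays_unrolled(int *x, const int *y, int n)  /* n assumed divisible by 4 */
    {
        for (int i = 0; i < n; i += 4) {
            x[i]     = x[i]     + y[i];       /* the four statements touch       */
            x[i + 1] = x[i + 1] + y[i + 1];   /* different elements, so they     */
            x[i + 2] = x[i + 2] + y[i + 2];   /* are independent and can be      */
            x[i + 3] = x[i + 3] + y[i + 3];   /* scheduled or issued in parallel */
        }
    }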
4) What is a multicore microprocessor?
A multicore design takes several processor cores and packages them
as a single processor. The goal is to enable the system to run more tasks
simultaneously and thereby achieve greater overall system performance.
5) Define SMT.
o Simultaneous multithreading (SMT) is a variation on
multithreading that uses the resources of a multiple-issue,
dynamically scheduled processor to exploit TLP at the same time it
exploits ILP.
o SMT also increases hardware design flexibility.
o A simultaneous multithreading architecture is superior in
performance to a multiple-issue multiprocessor (multi-issue CMP).
6) What is an antidependency?
An antidependence between instruction i and instruction j occurs
when instruction j writes a register or memory location that
instruction i reads.

7) What is a parallel processing challenge?
Parallel processing challenges are explained with Amdahl's law.
There are two important hurdles in parallel processing; how difficult or
easy they are is determined by the application and by the architecture.
First major challenge: the limited parallelism available in programs.
Second major challenge: the large latency of remote access in a parallel
processor.
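The first hurdle can be quantified with Amdahl's law,
speedup = 1 / ((1 - f) + f/n), where f is the parallelizable fraction and n
the number of processors; a small C sketch with assumed example values:

    #include <stdio.h>

    /* Amdahl's law: the serial fraction (1 - f) limits the overall speedup. */
    static double amdahl_speedup(double f, int n)
    {
        return 1.0 / ((1.0 - f) + f / n);
    }

    int main(void)
    {
        printf("f = 0.95, n = 100 -> speedup = %.1f\n", amdahl_speedup(0.95, 100)); /* ~16.8 */
        printf("f = 0.99, n = 100 -> speedup = %.1f\n", amdahl_speedup(0.99, 100)); /* ~50.3 */
        return 0;
    }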
8) Compare UMA and NUMA multiprocessors. (APRIL/MAY 2015)
Non-uniform memory access (NUMA) is a computer
memory design used in multiprocessing, where the memory access
time depends on the memory location relative to the processor. The
benefits of NUMA are limited to particular workloads, notably on
servers where the data are often associated strongly with certain tasks
or users.
Uniform memory access (UMA) is a shared memory architecture
used in parallel computers. All the processors in the UMA model
share the physical memory uniformly. In a UMA architecture, access
time to a memory location is independent of which processor makes
the request or which memory chip contains the transferred data.
9) Differentiate between strong scaling and weak scaling. (APRIL/MAY 2015)
There are two basic ways to measure the parallel performance of a
given application, depending on whether it is CPU bound or
memory bound. These are referred to as strong scaling and weak scaling.
Strong scaling: the problem size stays fixed but the number of
processing elements is increased. This is used as justification for
programs that take a long time to run.
Weak scaling: the problem size assigned to each processing
element stays constant, and additional elements are used to solve
a larger total problem.
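A small sketch contrasting the two measurements; all run times are assumed
example values:

    #include <stdio.h>

    int main(void)
    {
        /* Strong scaling: same total problem, more processing elements. */
        double t_serial = 100.0, t_parallel16 = 8.0;     /* seconds (assumed) */
        printf("strong-scaling speedup on 16 PEs = %.1f\n", t_serial / t_parallel16);

        /* Weak scaling: problem size per processing element stays fixed, so
           the total problem grows; ideal efficiency stays at 1.0. */
        double t_one_pe = 10.0, t_sixteen_pe = 11.5;     /* seconds (assumed) */
        printf("weak-scaling efficiency on 16 PEs = %.2f\n", t_one_pe / t_sixteen_pe);
        return 0;
    }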
10) Explain fine-grained and coarse-grained multithreading. (MAY/JUNE 2016)
Fine-grained multithreading:
Fine-grained multithreading switches between threads on each
instruction, causing the execution of multiple threads to be
interleaved.
This interleaving is often done in round-robin fashion, skipping
any threads that are stalled at that time.
Coarse-grained multithreading:
Coarse-grained multithreading switches threads only on costly
stalls, such as level-2 cache misses.
11) What is the difference between centralised and distributed multicore
processors?

Centralised:
Centralised shared-memory architectures share a single centralised
memory, interconnecting the processors and memory by a bus.
This is also called UMA or symmetric architecture.

Distributed:
Each processor core shares the entire memory, although the access time
to local memory attached to the core's chip will be much faster than the
access time to remote memories.
This is also called NUMA.
12) What is register renaming?
o This renaming can be done more easily for register operands.
o Register renaming can be done either statically by the compiler or
dynamically by the hardware.
13) Define parallel and dependent instructions.
Parallel:
If two instructions are parallel, they can execute simultaneously in the
pipeline datapath without causing any stall.
Dependent:
If two instructions are dependent, they are not parallel and must be
executed in order.
14) What are the various types of dependency?
Data dependences
Name dependences
Control dependences.
15) State the need for Instruction Level parallelism?(MAY/JUNE
2016)
16) What is instruction level parallelism?(NOV/DEC
2016,APRIL/MAY 2017)
17) Distinguish between implicit multithreading and explicit
multithreading? (APRIL/MAY 2017)
18) Define a super scalar processor.(NOV/DEC 2015)
Unit V
1) What is the need to implement memory as a hierarchy? (APRIL/MAY 2015)
The principle of locality is exploited by implementing the memory as a hierarchy.
A memory hierarchy is a structure that uses multiple levels of memories;
as the distance from the processor increases, the size of the memories and
the access time both increase.
2) Define TLB.
To support demand paging and virtual memory, the processor
has to access the page table, which is kept in main memory.
To avoid this extra access time and the resulting degradation of performance,
a small portion of the page table is accommodated in the memory
management unit.
This portion is called the translation lookaside buffer (TLB).
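A minimal C sketch of a fully associative TLB lookup; the entry format, size,
and function name are illustrative assumptions, not the actual MMU organization:

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 16

    struct tlb_entry {
        bool     valid;
        uint32_t vpn;   /* virtual page number  */
        uint32_t ppn;   /* physical page number */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Return true on a TLB hit and set *ppn; on a miss the page table in
       main memory must be walked (the slow path the TLB exists to avoid). */
    static bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *ppn = tlb[i].ppn;
                return true;
            }
        }
        return false;
    }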

3) What is hit rate and miss penalty? (NOV/DEC 2015)

HIT RATE: the hit rate, or hit ratio, is the fraction of memory
accesses found in the upper level; it is often used as a measure of the
performance of the memory hierarchy.
MISS PENALTY: the time to replace a block in the upper level
with the corresponding block from the lower level, plus the time to
deliver the block to the processor.
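Hit rate and miss penalty are commonly combined into the average memory
access time (AMAT); a small sketch with assumed example values:

    #include <stdio.h>

    int main(void)
    {
        double hit_time     = 1.0;    /* cycles to access the upper level (assumed)     */
        double miss_rate    = 0.05;   /* 1 - hit rate (assumed)                         */
        double miss_penalty = 100.0;  /* cycles to bring the block from the lower level */

        /* AMAT = hit time + miss rate * miss penalty */
        printf("AMAT = %.1f cycles\n", hit_time + miss_rate * miss_penalty);  /* 6.0 */
        return 0;
    }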
4) Write the modes of operation available in DMA.
A DMA controller has registers that are accessed by the processor to initiate
transfer operations.
There are four types of DMA registers:
DMA address register
Word count register
Control register
Status register
5) Define the two types of locality and explain them.
The principle of locality states that programs access a relatively small
portion of their address space at any instant of time.
Temporal locality
Spatial locality
Temporal locality: if an item is referenced, it will tend to be
referenced again soon.
Spatial locality: if an item is referenced, items whose addresses
are close by will tend to be referenced soon.
6) What are the various types of memory? (NOV/DEC 2015)
The primary memory technologies used in memory hierarchies are:
1. DRAM  2. SRAM  3. Magnetic disk

7) Differentiate SRAM and DRAM.

SRAMs don't need to be refreshed, so the access time is very close to
the cycle time. SRAMs typically use six transistors per bit to prevent
the information from being disturbed when read; SRAM needs only
minimal power to retain the charge in standby mode. SRAM
designs are concerned with speed and capacity, while in DRAM
designs the emphasis is on cost per bit and capacity. The capacity of
DRAMs is roughly 4 to 8 times that of SRAMs. The cycle time of
SRAMs is 8 to 16 times faster than that of DRAMs, but they are also
8 to 16 times as expensive.
8) Define virtual memory and page fault.
Virtual memory:
In a modern computer, the operating system moves programs and
data automatically between main memory and secondary storage.
Techniques that automatically swap program and data blocks
between main memory and secondary storage devices are
called virtual memory.
Page fault:
A page fault occurs when the required page is not found in main memory.

9) Differentiate programmed I/O and interrupt I/O. (NOV/DEC 2014)
Programmed I/O:
I/O operations mean a data transfer between an I/O device and
memory or between an I/O device and the processor. If in a
computer system I/O operations are completely controlled by the
processor, then that system is said to be using programmed I/O.
Interrupt I/O:
An external asynchronous input is used to tell the processor that an I/O
device needs its service, and hence the processor does not have to keep
checking whether the I/O device needs service or not.

10) What is the purpose of the dirty/modified bit in cache memory? (NOV/DEC 2014)
The dirty (modified) bit indicates whether the block has been written
while it is in the cache. In a write-back cache, when a block is replaced,
the dirty bit tells the hardware whether the block must first be written
back to the lower-level memory; blocks that are still clean can simply be
overwritten.
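A minimal C sketch of how the dirty bit is consulted when a block is replaced
in a write-back cache; the line structure and the two stubbed helpers are
hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    struct cache_line {
        bool     valid;
        bool     dirty;      /* set whenever the block is written while cached */
        uint32_t tag;
        uint8_t  data[64];
    };

    /* Stub helpers standing in for the memory interface. */
    static void write_block_to_memory(const struct cache_line *line) { (void)line; }
    static void read_block_from_memory(struct cache_line *line, uint32_t tag) { (void)line; (void)tag; }

    static void replace_block(struct cache_line *line, uint32_t new_tag)
    {
        if (line->valid && line->dirty)
            write_block_to_memory(line);   /* only modified blocks are written back */
        read_block_from_memory(line, new_tag);
        line->valid = true;
        line->dirty = false;               /* the newly loaded block starts out clean */
        line->tag   = new_tag;
    }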
11) What are the advantages of virtual memory? (MAY/JUNE 2016)

Advantages:
Allocating memory is easy and cheap.
Any free page is OK; the OS can take the first one off the free list it keeps.
It eliminates external fragmentation.
Data (page frames) can be scattered all over physical memory.
12) Define memory hierarchy. (MAY/JUNE 2016)

The principle of locality is exploited by implementing the memory of a
computer as a memory hierarchy.
A memory hierarchy can consist of multiple levels, but data
is copied between only two adjacent levels at a time.

13) What is meant by address mapping? (NOV/DEC 2016)

Address translation, also called address mapping, is the process by
which a virtual address is mapped to a physical address used to access
memory.
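A minimal C sketch of the mapping itself, assuming 4 KiB pages and a flat
one-level page table (both assumptions for illustration):

    #include <stdint.h>

    #define PAGE_SHIFT 12                        /* 4 KiB pages (assumed)          */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  1024u                     /* size of the toy page table     */

    static uint32_t page_table[NUM_PAGES];       /* virtual page -> physical frame */

    /* Map a virtual address to the physical address used to access memory. */
    static uint32_t translate(uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;          /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);      /* offset within page  */
        uint32_t frame  = page_table[vpn % NUM_PAGES];  /* physical frame      */
        return (frame << PAGE_SHIFT) | offset;
    }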
14) What is cache memory? (NOV/DEC 2016)
Cache memory is a small, fast memory added between the processor and main
memory to speed up the execution process; it holds the program and data that
are currently being executed and accessed by the processor.
15) Define memory interleaving. (APRIL/MAY 2017)
16) Summarize the sequence of events involved in handling an interrupt
request from a single device. (APRIL/MAY 2017)
17) Point out how DMA can improve I/O speed. (APRIL/MAY 2015)