Advanced Computer Architecture
PART - A
UNIT - 1
6 hours
UNIT - 2
6 Hours
UNIT - 3
INSTRUCTION-LEVEL PARALLELISM - 1: ILP: Concepts and challenges; Basic
Compiler Techniques for exposing ILP; Reducing Branch costs with prediction;
Overcoming Data hazards with Dynamic scheduling; Hardware-based speculation.
7 Hours
UNIT - 4
INSTRUCTION-LEVEL PARALLELISM - 2: Exploiting ILP using multiple issue
and static scheduling; Exploiting ILP using dynamic scheduling, multiple issue and
speculation; Advanced Techniques for instruction delivery and Speculation; The Intel
Pentium 4 as example. 7 Hours
PART - B
UNIT - 5
UNIT - 6
UNIT - 7
UNIT - 8
TEXT BOOK:
REFERENCE BOOKS:
PART - A
UNIT - 1
Dependability
6 hours
UNIT I
Introduction
Today's desktop computers (costing less than $500) have more performance, larger
memory and more storage than a computer bought in 1985 for 1 million dollars. The
highest-performance microprocessors of today outperform supercomputers of less
than 10 years ago. This rapid improvement has come both from advances in the
technology used to build computers and from innovations in computer design; in
other words, the improvement made in computers can be attributed to innovations in
technology and in architecture design.
In the early 1980s, Reduced Instruction Set Computer (RISC) based machines
focused the attention of designers on two critical performance techniques: the
exploitation of Instruction Level Parallelism (ILP) and the use of caches. Figure 1.1
shows the growth in processor performance since the mid 1980s. The graph plots
performance relative to the VAX-11/780 as measured by the SPECint benchmarks.
From the figure it is clear that architectural and organizational enhancements led to
16 years of sustained growth in performance at an annual rate of over 50%. Since
2002, processor performance improvement has dropped to about 20% per year due to
the following hurdles:
• the maximum power dissipation of air-cooled chips,
• little remaining ILP left to exploit efficiently, and
• almost unchanged memory latency.
These hurdles signal a historic switch from relying solely on ILP to Thread Level
Parallelism (TLP) and Data Level Parallelism (DLP).
Classes of Computers
Desktop computing
The first, and still the largest market in dollar terms, is desktop computing. Desktop
systems range in cost from $500 (low-end) to $5000 (high-end configurations).
Throughout this price range, the desktop market tends to be driven to optimize
price-performance. The performance of concern is compute performance and
graphics performance. The combination of performance and price is the driving
factor for customers and computer designers. Hence, the newest, highest-performance
and most cost-effective processors often appear first in desktop computers.
Servers:
Servers provide large-scale and reliable computing and file services and are
mainly used in large-scale enterprise computing and web-based services. The three
important characteristics of servers are:
•Dependability: Servers must operate 24x7. Failure of a server system is far more
catastrophic than failure of a desktop, since an enterprise will lose revenue if the
server is unavailable.
•Scalability: As the business grows, the server may have to provide more
functionality/services. Thus the ability to scale up the computing capacity, memory,
storage and I/O bandwidth is crucial.
•Throughput: Transactions completed per minute or web pages served per second
are crucial for servers.
Embedded Computers
Simple embedded microprocessors are found in washing machines, printers,
network switches, and handheld devices such as cell phones, smart cards, video game
devices etc. Embedded computers have the widest spread of processing power and
cost. The primary goal is often meeting the performance need at a minimum price
rather than achieving higher performance at a higher price. The other two characteristic
requirements are to minimize memory and power.
ISA refers to the actual programmer-visible instruction set. The ISA serves as the
boundary between the software and the hardware. The seven dimensions of the ISA
include:
iii) Addressing modes: Specify the address of a memory object apart from register and
constant operands.
MIPS addressing modes:
•Register mode addressing
•Immediate mode addressing
•Displacement mode addressing
The 80x86, in addition to the above, supports the following additional addressing
modes:
i. Register Indirect
ii. Indexed
iii. Based with Scaled Index
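As a brief sketch of how these modes appear in code (the register assignments in the
comments are hypothetical, chosen only for illustration):

/* How common C operations map onto MIPS addressing modes. */
long x, y, a[100];
x = x + 5;   /* immediate:    DADDIU R1, R1, #5                        */
x = x + y;   /* register:     DADD   R1, R1, R4                        */
x = a[10];   /* displacement: LD     R1, 80(R2)  (R2 = base of a,
                                                  10 x 8-byte words)   */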
IC Logic technology:
Transistor density increases by about 35% per year. Increase in die size
corresponds to about 10% to 20% per year. The combined effect is a growth rate
in transistor count on a chip of about 40% to 55% per year. Semiconductor DRAM
technology: capacity increases by about 40% per year.
Storage Technology:
Before 1990: the storage density increased by about 30% per year.
After 1990: the storage density increased by about 60 % per year.
Disks are still 50 to 100 times cheaper per bit than DRAM.
Network Technology:
Network performance depends both on the performance of the switches and
on the performance of the transmission system. Although the technology improves
continuously, the impact of these improvements can be in discrete leaps.
A simple rule of thumb is that bandwidth grows by at least the square of the
improvement in latency. Computer designers should make plans accordingly.
•IC processes are characterized by their feature sizes.
•Feature sizes decreased from 10 microns (1971) to 0.09 microns (2006).
•As feature sizes shrink, devices shrink quadratically.
•Shrinking in the vertical direction reduces the operating voltage of the transistor.
•Transistor performance improves linearly with decreasing feature size.
•Transistor count improves quadratically with a linear improvement in transistor
performance.
•Wire delay scales poorly compared to transistor performance.
•As feature sizes shrink, wires get shorter, but the signal delay for a wire increases
in proportion to the product of its resistance and capacitance.
•For a fixed task, slowing the clock rate (frequency switched) reduces power, but not
energy.
•The capacitive load is a function of the number of transistors connected to an output
and of the technology, which determines the capacitance of wires and transistors.
Trends in Cost
• The underlying principle that drives cost down is the learning curve: manufacturing
costs decrease over time.
• Volume is a second key factor in determining cost. Volume decreases cost since it
increases purchasing and manufacturing efficiency. As a rule of thumb, cost decreases
about 10% for each doubling of volume.
Cost of IC = Cost of [die + testing die + packaging and final test] / (Final test yield)
The number of dies per wafer is approximately the area of the wafer divided by the area
of the die:
Dies per wafer = [π x (Wafer diameter/2)^2 / Die area] - [π x Wafer diameter / sqrt(2 x Die area)]
The first term is the ratio of wafer area to die area and the second term compensates for
the rectangular dies near the periphery of round wafers (as shown in the figure).
Dependability:
Infrastructure providers offer Service Level Agreements (SLA) or Service
Level Objectives (SLO) to guarantee that their networking or power services will be
dependable.
The two main measures of dependability are Module Reliability and Module
Availability. Module reliability is a measure of continuous service accomplishment (or
time to failure) from a reference initial instant.
1. Mean Time To Failure (MTTF) measures reliability.
2. Failures In Time (FIT) = 1/MTTF, the rate of failures
• Traditionally reported as failures per billion hours of operation
• Mean Time To Repair (MTTR) measures service interruption
– Mean Time Between Failures (MTBF) = MTTF + MTTR
• Module availability measures service as the alternation between the two states of
accomplishment and interruption (a number between 0 and 1, e.g. 0.9)
• Module availability = MTTF / (MTTF + MTTR)
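A small worked sketch of these measures (the module values are made up for
illustration):

/* Dependability arithmetic: FIT, MTBF and availability. */
#include <stdio.h>

int main(void) {
    double mttf = 1000000.0;  /* assumed MTTF in hours */
    double mttr = 24.0;       /* assumed MTTR in hours */

    double fit          = 1e9 / mttf;        /* failures per billion hours */
    double mtbf         = mttf + mttr;       /* MTBF = MTTF + MTTR         */
    double availability = mttf / (mttf + mttr);

    printf("FIT = %.0f, MTBF = %.0f h, availability = %.6f\n",
           fit, mtbf, availability);
    return 0;
}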
Performance:
The Execution time or Response time is defined as the time between the start and
completion of an event. The total amount of work done in a given time is defined as the
Throughput.
A computer user says a computer is faster when a program runs in less time.
The routinely executed programs are the best candidates for evaluating the performance
of a new computer. To evaluate a new system, the user would simply compare the
execution time of their workloads.
Benchmarks
The real applications are the best choice of benchmarks to evaluate the
performance. However, for many of the cases, the workloads will not be known at the
time of evaluation. Hence, the benchmark program which resemble the real applications
are chosen. The three types of benchmarks are:
• KERNELS, which are small, key pieces of real applications;
• Toy Programs: 100-line programs from beginning programming assignments,
such as Quicksort;
• Synthetic Benchmarks: Fake programs invented to try to match the profile and
behavior of real applications such as Dhrystone.
To keep the evaluation process fair, benchmark vendors typically take one of the
following positions on source code modification:
• Source code modifications are not allowed.
• Source code modifications are allowed, but are essentially impossible.
• Source code modifications are allowed, as long as the modified version produces
the same output.
• To increase predictability, collections of benchmark applications, called
benchmark suites, are popular
• SPECCPU: popular desktop benchmark suite given by Standard Performance
Evaluation committee (SPEC)
– CPU only, split between integer and floating point programs
– SPECint2000 has 12 integer programs, SPECfp2000 has 14 floating-point programs
– SPECCPU2006 announced in Spring 2006.
SPECSFS (NFS file server) and SPECWeb (WebServer) added as server
benchmarks
While designing a computer, the following principles can be exploited to enhance
performance.
* Parallelism is one of the most important methods for improving performance.
- One of the simplest ways to exploit it is pipelining, i.e., overlapping instruction
execution to reduce the total time to complete an instruction sequence.
- Parallelism can also be exploited at the level of detailed digital design.
- Set-associative caches use multiple banks of memory that are typically searched
in parallel; carry-lookahead adders use parallelism to speed the process of
computing sums.
* Principle of locality: programs tend to reuse data and instructions they have used
recently. The rule of thumb is that a program spends 90% of its execution time in only
10% of the code. With reasonably good accuracy, a prediction can be made of what
instructions and data a program will use in the near future based on its accesses in the
recent past.
* Focus on the common case: while making a design trade-off, favor the frequent case
over the infrequent case. This principle applies when determining how to spend
resources, since the impact of an improvement is higher if the occurrence is frequent.
Amdahl's Law: Amdahl's law is used to find the performance gain that can be obtained
by improving some portion or functional unit of a computer. Amdahl's law defines the
speedup that can be gained by using a particular feature.
Speedup is the ratio of the performance for the entire task using the enhancement
when possible to the performance for the entire task without using the enhancement.
Execution time is the reciprocal of performance; alternatively, speedup is the ratio of
the execution time for the entire task without using the enhancement to the execution
time for the entire task using the enhancement when possible.
Speedup from some enhancement depends on two factors:
i. The fraction of the computation time in the original computer that can be
converted to take advantage of the enhancement. Fraction enhanced is always less than
or equal to 1.
Example: If 15 seconds of the execution time of a program that takes 50
seconds in total can use an enhancement, the fraction is 15/50 or 0.3.
ii. The improvement gained by the enhanced execution mode, i.e., how much
faster the task would run if the enhanced mode were used for the entire program.
Speedup enhanced is the time of the original mode over the time of the enhanced mode
and is always greater than 1.
The overall speedup is then:
Speedup overall = 1 / [(1 - Fraction enhanced) + (Fraction enhanced / Speedup enhanced)]
The processor is driven by a clock running at a constant rate; these discrete time events
are called clock ticks or clock cycles.
CPU time for a program can be evaluated as:
CPU time = Instruction count x Cycles per instruction (CPI) x Clock cycle time
Example:
A system contains a floating point (FP) unit and a floating point square root (FPSQR)
unit. FPSQR is responsible for 20% of the execution time and FP instructions for half
of it. One proposal is to enhance the FPSQR hardware and speed up this operation by
a factor of 10; the second alternative is to make all FP instructions run faster by a
factor of 1.6 with the same design effort as required for the fast FPSQR. Compare the
two design alternatives.
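A short sketch of the comparison using Amdahl's law (the fractions and speedups are
those of the example above):

/* Amdahl's law: Speedup = 1 / ((1 - F) + F / S) */
#include <stdio.h>

static double amdahl(double fraction, double speedup) {
    return 1.0 / ((1.0 - fraction) + fraction / speedup);
}

int main(void) {
    double fpsqr_option = amdahl(0.2, 10.0); /* speed up FPSQR by 10x   */
    double fp_option    = amdahl(0.5, 1.6);  /* speed up all FP by 1.6x */
    printf("FPSQR option: %.3f, FP option: %.3f\n", fpsqr_option, fp_option);
    /* roughly 1.22 vs 1.23: improving the more frequent case (all FP
       instructions) wins slightly, illustrating "focus on the common case" */
    return 0;
}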
UNIT - 2
PIPELINING:
Introduction
Pipeline hazards
Implementation of pipeline
6 Hours
UNIT II
Under these conditions, the speedup from pipelining equals the number of pipeline
stages. In practice, the pipeline stages are not perfectly balanced, and pipelining does
involve some overhead; the speedup will therefore always be less than the number of
stages of the pipeline. Pipelining yields a reduction in the average execution time per
instruction. If the processor is assumed to take one (long) clock cycle per instruction,
then pipelining decreases the clock cycle time. If the processor is assumed to take
multiple CPI, then pipelining helps reduce the CPI.
Decode the instruction and access the register file. Decoding is done in parallel
with reading the registers, which is possible because the register specifiers are at fixed
locations in a RISC architecture. This is known as fixed-field decoding. In addition,
this stage:
- Performs an equality test on the registers as they are read, for a possible branch.
- Sign-extends the offset field of the instruction in case it is needed.
- Computes the possible branch target address.
The ALU operates on the operands prepared in the previous cycle and performs
one of the following functions depending on the instruction type:
* Register-Register ALU instruction: the ALU performs the operation specified in the
instruction using the values read from the register file.
* Register-Immediate ALU instruction: the ALU performs the operation specified in
the instruction using the first value read from the register file and the sign-extended
immediate.
The execution of the instruction comprising the above subtasks can be pipelined. Each
of the clock cycles from the previous section becomes a pipe stage: a cycle in the
pipeline. A new instruction can be started on each clock cycle, which results in the
execution pattern shown in figure 2.1. Though each instruction takes 5 clock cycles to
complete, during each clock cycle the hardware will initiate a new instruction and will
be executing some part of five different instructions, as illustrated in figure 2.1.
Each stage of the pipeline must be independent of the other stages. Also, two different
operations cannot be performed with the same datapath resource in the same clock
cycle. For example, a single ALU cannot be used to compute the effective address and
perform a subtract operation during the same clock cycle. An adder is therefore
provided in stage 1 to compute the new PC value, and an ALU in stage 3 to perform
the arithmetic indicated in the instruction (see figure 2.2). Conflicts should not arise
out of the overlap of instructions in the pipeline; in other words, the functional units
of each stage need to be independent of the functional units of the other stages. There
are three observations due to which the risk of conflict is reduced:
• Separate instruction and data memories at the level of the L1 cache eliminate a
conflict for a single memory that would arise between instruction fetch and data
access.
• The register file is accessed during two stages, namely the ID stage and the WB
stage. The hardware should allow a maximum of two reads and one write every clock
cycle.
• To start a new instruction every cycle, it is necessary to increment and store the
PC every cycle.
Buffers or registers are introduced between successive stages of the pipeline so that at the
end of a clock cycle the results from one stage are stored into a register (see figure 2.3).
During the next clock cycle, the next stage will use the content of these buffers as input.
Figure 2.4 visualizes the pipeline activity.
Pipelining increases the CPU instruction throughput, but it does not reduce the
execution time of an individual instruction. In fact, pipelining increases the execution
time of each instruction due to overhead in the control of the pipeline. Pipeline
overhead arises from the combination of register delays and clock skew. Imbalance
among the pipe stages reduces performance, since the clock can run no faster than the
time needed for the slowest pipeline stage.
Pipeline Hazards
Hazards may cause the pipeline to stall. When an instruction is stalled, all the
instructions issued later than the stalled instruction are also stalled. Instructions issued
earlier than the stalled instruction continue in the normal way. No new instructions
are fetched during the stall. A hazard is a situation that prevents the next instruction in
the instruction stream from executing during its designated clock cycle. Hazards
reduce the pipeline performance.
A stall causes the pipeline performance to degrade from the ideal performance.
Performance improvement from pipelining is obtained from:

Speedup = Average instruction time unpipelined / Average instruction time pipelined

Assume that:
i) the cycle time overhead of the pipeline is ignored
ii) the stages are balanced
With these assumptions, if all the instructions take the same number of cycles, equal
to the number of pipeline stages (the depth of the pipeline), then:

Speedup = Pipeline depth / (1 + Pipeline stall cycles per instruction)
Types of hazard
The three types of hazards are:
1. Structural hazard
2. Data hazard
3. Control hazard
Structural hazard
Structural hazards arise from resource conflicts, when the hardware cannot support all
possible combinations of instructions simultaneously in overlapped execution. If some
combination of instructions cannot be accommodated because of resource conflicts,
the processor is said to have a structural hazard. A structural hazard arises when some
functional unit is not fully pipelined, or when some resource has not been duplicated
enough to allow all combinations of instructions in the pipeline to execute. For
example, if memory is shared for data and instructions, then when an instruction
contains a data memory reference, it will conflict with the instruction fetch of a later
instruction (as shown in figure 2.5a). This causes a hazard and the pipeline stalls for
1 clock cycle.
Data Hazard
Consider the pipelined execution of the following instruction sequence (Timing diagram
shown in figure 2.6)
DADD instruction produces the value of R1 in WB stage (Clock cycle 5) but the DSUB
instruction reads the value during its ID stage (clock cycle 3). This problem is called Data
Hazard. DSUB may read the wrong value if precautions are not taken. AND instruction
will read the register during clock cycle 4 and will receive the wrong results. The XOR
instruction operates properly, because its register read occurs in clock cycle 6 after
DADD writes in clock cycle 5. The OR instruction also operates without incurring a
hazard because the register file reads are performed in the second half of the cycle
whereas the writes are performed in the first half of the cycle.
The DADD instruction will produce the value of R1 at the end of clock cycle 3. The
DSUB instruction requires this value only during clock cycle 4. If the result can be
moved from the pipeline register where the DADD stores it to the point where the
DSUB needs it (the input of the ALU), then the need for a stall can be avoided. Using
a simple hardware technique called data forwarding (also bypassing or short
circuiting), data can be made available from the output of the ALU to the point where
it is required (the input of the ALU) at the beginning of the immediately following
clock cycle.
Forwarding works as follows:
i) The ALU output from the EX/MEM and MEM/WB pipeline registers is always fed
back to the ALU inputs.
ii) If the forwarding hardware detects that a previous ALU output serves as the
source for the current ALU operation, control logic selects the forwarded result
as the input rather than the value read from the register file. Forwarded results are
required not only from the immediately previous instruction, but also from an
instruction that started 2 cycles earlier: the result of the ith instruction is required to
be forwarded to the (i+2)th instruction as well. Forwarding can be generalized to
include passing a result directly to the functional unit that requires it.
LD R1, 0(R2)
DADD R3, R1, R4
AND R5, R1, R6
OR R7, R1, R8
The pipelined data path for these instructions is shown in the timing diagram (figure 2.7)
The LD instruction gets the data from memory at the end of cycle 4. Even with
forwarding, the data from the LD instruction can be made available earliest at the
beginning of clock cycle 5. The DADD instruction, however, requires the result of the
LD instruction at the beginning of clock cycle 4. This demands data forwarding in
negative time, which is not possible. Hence, the situation calls for a pipeline stall. The
result from the LD instruction can be forwarded from the pipeline register to the AND
instruction, which begins 2 clock cycles after the LD instruction. The load instruction
has a delay or latency that cannot be eliminated by forwarding alone; it is necessary to
stall the pipeline by 1 clock cycle. Hardware called a pipeline interlock detects a
hazard and stalls the pipeline until the hazard is cleared. The pipeline interlock helps
preserve the correct execution pattern by introducing a stall or bubble. The CPI for the
stalled instruction increases by the length of the stall. Figure 2.7 shows the pipeline
before and after the stall. The stall causes the DADD to move 1 clock cycle later in
time; forwarding to the AND instruction now goes through the register file, and no
forwarding is required for the OR instruction. No instruction is started during clock
cycle 4.
Control Hazard
When a branch is executed, it may or may not change the content of the PC. If a
branch is taken, the content of the PC is changed to the target address; if a branch is
not taken, the content of the PC is not changed.
The simple way of dealing with branches is to redo the fetch of the instruction
following a branch. The first IF cycle is essentially a stall because it never performs
useful work. One stall cycle for every branch yields a performance loss of 10% to
30%, depending on the branch frequency.
There are many methods for dealing with the pipeline stalls caused by branch delay:
1. Freeze or flush the pipeline, holding or deleting any instructions after the branch
until the branch destination is known. It is a simple scheme, and the branch penalty is
fixed and cannot be reduced by software.
2. Treat every branch as not taken, simply allowing the hardware to continue as if
the branch were not executed. Care must be taken not to change the processor state
until the branch outcome is known. Instructions are fetched as if the branch were a
normal instruction. If the branch is taken, it is necessary to turn the fetched instruction
into a no-op and restart the fetch at the target address. Figure 2.8 shows the timing
diagram of both situations.
3. Treat every branch as taken: as soon as the branch is decoded and the target
address is computed, begin fetching and executing at the target. If the branch target is
known before the branch outcome, this scheme has an advantage.
For both the predicted-taken and predicted-not-taken schemes, the compiler can
improve performance by organizing the code so that the most frequent path matches
the hardware choice.
4. Delayed branch: this technique is commonly used in early RISC processors.
In a delayed branch, the execution cycle with a branch delay of one is:
Branch instruction
Sequential successor
Branch target if taken
The sequential successor is in the branch delay slot and is executed irrespective of
whether or not the branch is taken. The pipeline behavior with a branch delay is shown
in Figure 2.9. Processors with delayed branch normally have a single instruction
delay. The compiler has to make the successor instruction valid and useful; there are
three ways in which the branch delay slot can be scheduled: from before the branch,
from the branch target, or from the fall-through path.
Types of exceptions:
The term exception is used to cover the terms interrupt, fault and exception.
I/O device requests, page faults, invoking an OS service from a user program, integer
arithmetic overflow, memory protection violations, hardware malfunctions, power
failures etc. are the different classes of exceptions. Individual events have important
characteristics that determine what action is needed for that exception.
If the event occurs at the same place every time the program is executed with the
same data and memory allocation, the event is synchronous. Asynchronous events, by
contrast, are caused by devices external to the CPU and memory; such events are
handled after the completion of the current instruction.
NOTE:
1. With pipelining, multiple exceptions may occur in the same clock cycle because
there are multiple instructions in execution.
2. Handling the exceptions becomes still more complicated when instructions are
allowed to execute out of order.

Operation: send out the [PC] and fetch the instruction from memory into the
Instruction Register (IR). Increment the PC by 4 to address the next sequential
instruction.

Operation: decode the instruction and access the register file to read the registers
(rs and rt). A and B are temporary registers. Operands are kept ready for use in the
next cycle.
Decoding is done concurrently with reading the registers. The MIPS ISA has
fixed-length instructions, hence these fields are at fixed locations.

Operation: the ALU adds the operands to compute the effective address and places
the result into the register ALUOutput.

* Register-Register ALU instruction:
Operation: the ALU performs the operation specified by the function code on the
values taken from the contents of register A and register B.

* Register-Immediate ALU instruction:
Operation: the contents of register A and register Imm are operated on (function Op)
and the result is placed in the temporary register ALUOutput.

* Branch:
Operation: the ALU computes the branch target address by adding the incremented
PC to the sign-extended, shifted offset (ALUOutput = NPC + (Imm << 2)), and the
condition (e.g., A == 0) is evaluated to decide whether the branch is taken.
UNIT - 3
Hardware-based speculation.
7 Hours
UNIT III
Instruction Level Parallelism
The potential overlap among instruction executions is called Instruction Level
Parallelism (ILP), since the instructions can be executed in parallel. There are mainly
two approaches to exploit ILP: (i) dynamic, hardware-intensive techniques that
discover and exploit parallelism at run time, and (ii) static, compiler-intensive
techniques that find parallelism at compile time.
Factors of both programs and processors limit the amount of parallelism that can be
exploited among instructions, and these limit the performance achievable. The
performance of a pipelined processor is given by:
Pipeline CPI = Ideal pipeline CPI + Structural stalls + Data hazard stalls + Control stalls
By reducing each of the terms on the right-hand side, it is possible to minimize the
overall pipeline CPI.
To exploit ILP, the primary focus is on the Basic Block (BB). A BB is a straight-line
code sequence with no branches in except at the entry and no branches out except at
the exit. The average size of a BB is very small, about 4 to 6 instructions. A
flow-diagram segment of a program is shown below (Figure 3.1); BB1, BB2 and BB3
are the basic blocks.
The amount of overlap that can be exploited within a basic block is therefore quite
limited. To further enhance ILP, it is possible to look for ILP across multiple BBs. The
simplest and most common way to increase the ILP is to exploit parallelism among
iterations of a loop (loop-level parallelism), since each iteration of a loop can often
overlap with any other iteration.
Data Dependences
An instruction j is data dependent on instruction i if either of the following holds:
i) Instruction i produces a result that may be used by instruction j, or
ii) Instruction j is data dependent on instruction k, and instruction k is data dependent
on instruction i (a chain of dependences).
Eg1: i: L.D F0, 0(R1)
j: ADD.D F4, F0, F2
The ith instruction loads data into F0 and the jth instruction uses F0 as one of its
operands; hence, the jth instruction is data dependent on the ith instruction.
Eg2: DADD R1, R2, R3
DSUB R4, R1, R5
Dependences are a property of programs. A data value may flow between
instructions either through registers or through memory locations. Detecting data flow
and dependences through registers is quite straightforward; dependences that flow
through memory locations are more difficult to detect. A data dependence conveys
three things: (1) the possibility of a hazard, (2) the order in which results must be
calculated, and (3) an upper bound on how much parallelism can be exploited.
Name Dependences
A name dependence occurs when two instructions use the same register or memory
location, but there is no flow of data between the instructions associated with that
name. There are two types:
i) Antidependence: occurs when instruction j writes a register or memory location
that instruction i reads.
ii) Output dependence: occurs when instructions i and j write to the same register or
memory location.
Ex: ADD.D F4, F0, F2
SUB.D F4, F3, F5
The ordering between the instructions must be preserved to ensure that the value
finally written corresponds to instruction j. The above instructions can be reordered or
executed simultaneously if the name of the register is changed. The renaming can be
done either statically by a compiler or dynamically by the hardware.
Data hazards: hazards are named by the ordering in the program that must be
preserved by the pipeline.
RAW (Read After Write): j tries to read a source before i writes it, so j incorrectly
gets the old value. This hazard is due to a true data dependence.
WAW (Write After Write): j tries to write an operand before it is written by i. WAW
hazards arise from output dependences.
WAR (Write After Read): j tries to write a destination before it is read by i, so i
incorrectly gets the new value. WAR hazards arise from antidependences and
normally cannot occur in a static-issue pipeline.
CONTROL DEPENDENCE:
A control dependence determines the ordering of an instruction i with respect to a
branch instruction.
Ex: if P1 {
S1;
}
if P2 {
S2;
}
S1 is control dependent on P1, and S2 is control dependent on P2 but not on P1.
The compiler can increase the amount of available ILP by transforming loops.
for (i=1000; i>0; i=i-1)
    x[i] = x[i] + s;
We can see that this loop is parallel by noticing that the body of each iteration is
independent.
The first step is to translate the above segment to MIPS assembly language:
Loop: L.D F0, 0(R1) ; F0 = array element
ADD.D F4, F0, F2 ; add scalar in F2
S.D F4, 0(R1) ; store result
DADDUI R1, R1, #-8 ; decrement pointer by 8 bytes (per DW)
BNE R1, R2, Loop ; branch if R1 != R2
Without any scheduling, the loop executes as follows and takes 9 cycles per iteration:
1 Loop: L.D F0, 0(R1) ; F0 = vector element
2 stall
3 ADD.D F4, F0, F2 ; add scalar in F2
4 stall
5 stall
6 S.D F4, 0(R1) ; store result
7 DADDUI R1, R1, #-8 ; decrement pointer 8B (DW)
8 stall ; assumes can't forward to branch
9 BNEZ R1, Loop ; branch R1 != zero
We can schedule the loop to obtain only two stalls and reduce the time to 7 cycles:
1 Loop: L.D F0, 0(R1)
2 DADDUI R1, R1, #-8
3 ADD.D F4, F0, F2
4 stall
5 stall
6 S.D F4, 8(R1) ; offset adjusted since R1 was already decremented
7 BNE R1, R2, Loop
Loop unrolling can be used to minimize the number of stalls. Unrolling the body of
the loop four times, the execution of four iterations can be done in 27 clock cycles, or
6.75 clock cycles per iteration (cycle numbers on the left):
1 Loop: L.D F0, 0(R1)
3 ADD.D F4, F0, F2
6 S.D F4, 0(R1)
7 L.D F6, -8(R1)
9 ADD.D F8, F6, F2
12 S.D F8, -8(R1)
13 L.D F10, -16(R1)
15 ADD.D F12, F10, F2
18 S.D F12, -16(R1)
19 L.D F14, -24(R1)
21 ADD.D F16, F14, F2
24 S.D F16, -24(R1)
25 DADDUI R1, R1, #-32
26 BNEZ R1, LOOP
The unrolled loop scheduled to minimize the stalls executes four iterations in 14 clock
cycles:
1 Loop: L.D F0, 0(R1)
2 L.D F6, -8(R1)
3 L.D F10, -16(R1)
4 L.D F14, -24(R1)
5 ADD.D F4, F0, F2
6 ADD.D F8, F6, F2
7 ADD.D F12, F10, F2
8 ADD.D F16, F14, F2
9 S.D F4, 0(R1)
10 S.D F8, -8(R1)
11 DADDUI R1, R1, #-32
12 S.D F12, 16(R1) ; 16 - 32 = -16
13 BNEZ R1, LOOP
14 S.D F16, 8(R1) ; 8 - 32 = -24, fills the branch delay slot
Loop unrolling requires understanding how one instruction depends on another and
how the instructions can be changed or reordered given the dependences:
1. Determine that unrolling the loop is useful by finding that the loop iterations are
independent (except for the loop-maintenance code).
2. Use different registers to avoid unnecessary constraints that would be forced by
using the same registers for different computations.
3. Eliminate the extra test and branch instructions and adjust the loop termination and
iteration code.
4. Determine that the loads and stores in the unrolled loop can be interchanged by
observing that loads and stores from different iterations are independent.
5. Schedule the code, preserving any dependences needed to yield the same result as
the original code.
To reduce the branch cost, the outcome of a branch may be predicted. The prediction
may be done statically at compile time using compiler support, or dynamically using
hardware support. Schemes to reduce the impact of control hazards are discussed
below:
Assume that the branch will not be taken and continue execution down the sequential
instruction stream. If the branch is taken, the instructions that are being fetched and
decoded must be discarded, and execution continues at the branch target. Discarding
instructions means we must be able to flush instructions in the IF, ID and EX stages.
Alternately, the branch can be predicted as taken: as soon as the decoded instruction
is found to be a branch, fetching starts from the target address at the earliest
opportunity.
The graph shows the misprediction rate for a set of SPEC benchmark programs.
• Problem: in a loop, a 1-bit BHT will cause two mispredictions (the average is 9
iterations before exit):
– at the end of the loop, when it exits instead of looping as before, and
– the first time through the loop on the next execution, when it predicts exit instead
of looping.
• A simple two-bit history table gives better performance. The four states of the 2-bit
predictor are shown in the state transition diagram.
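A minimal sketch of one 2-bit predictor entry (the 0..3 state encoding is one common
choice, not the only one):

/* 2-bit saturating counter: 0,1 predict not taken; 2,3 predict taken. */
typedef unsigned char counter2;   /* holds values 0..3 */

int predict_taken(counter2 c) {
    return c >= 2;
}

/* Train the counter with the actual branch outcome. */
void train(counter2 *c, int taken) {
    if (taken) { if (*c < 3) (*c)++; }   /* saturate at strongly taken     */
    else       { if (*c > 0) (*c)--; }   /* saturate at strongly not taken */
}

A branch history table is simply an array of such counters indexed by the low-order
bits of the branch instruction's address.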
• Idea: record the m most recently executed branches as taken or not taken, and use
that pattern to select the proper n-bit branch history table (BHT).
• In general, an (m,n) predictor records the last m branches to select between 2^m
history tables, each with n-bit counters.
• Each entry in the table has 2^m n-bit predictors. In the case of a (2,2) predictor, the
behavior of the two most recent branches selects between four predictions for the next
branch, updating just that prediction. The scheme of the table is shown.
A tournament predictor is a multilevel branch predictor that uses an n-bit saturating
counter to choose between predictors. The predictors used are a global predictor and
a local predictor.
– A typical tournament predictor will select the global predictor almost 40% of the
time for the SPEC integer benchmarks and less than 15% of the time for the SPEC
FP benchmarks.
– Existing tournament predictors use a 2-bit saturating counter per branch to choose
among two different predictors based on which predictor was most effective in recent
predictions.
Tomasulo's idea:
1. Have reservation stations where register renaming is possible.
2. Results are directly forwarded to the reservation stations along with the final
registers. This is also called short-circuiting or bypassing.
ROB:
1. Instructions are stored sequentially, with indicators to say whether an instruction is
speculative or has completed execution.
2. If an instruction has completed execution, is no longer speculative, and has reached
the head of the queue, then we commit it.
3. Allow instructions to execute and complete out of order, but force them to commit
in order.
4. Add hardware called the reorder buffer (ROB), with registers to hold the result of
an instruction between completion and commit.
The positions of the reservation stations, ROB and FP registers are indicated below.
Assume latencies: load 1 clock, add 2 clocks, multiply 10 clocks, divide 40 clocks.
Show the data structures just before MUL.D goes to commit.
Reservation Stations
At the time MUL.D is ready to commit, only the two L.D instructions have already
committed, though others have completed execution.
Actually, the MUL.D is at the head of the ROB; the L.D instructions are shown only
for understanding purposes. #X represents the value field of ROB entry number X.
Reorder Buffer
Example
Loop: LD F0, 0(R1)
MULTD F4, F0, F2
SD F4, 0(R1)
SUBI R1, R1, #8
BNEZ R1, Loop
Reorder Buffer
Notes
• If a branch is mispredicted, recovery is done by flushing the ROB of all entries that
appear after the mispredicted branch.
• Entries before the branch are allowed to continue.
• Fetch restarts at the correct branch successor.
• When an instruction commits or is flushed from the ROB, the corresponding slot
becomes available for subsequent instructions.
UNIT - IV
UNIT IV
What is ILP?
• Instruction Level Parallelism
– Number of operations (instructions) that can be performed in parallel
• Formally, two instructions are parallel if they can execute simultaneously in a pipeline
of arbitrary depth without causing any stalls assuming that the pipeline has sufficient
resources
– Primary techniques used to exploit ILP
• Deep pipelines
• Multiple issue machines
• Basic program blocks tend to have 4-8 instructions between branches
– Little ILP within these blocks
– Must find ILP between groups of blocks
lw $10, 12($1)
sub $11, $2, $3
and $12, $4, $5
or $13, $6, $7
add $14, $8, $9
(all five instructions above are independent, so they could execute in parallel)
lw $10, 12($1)
sub $11, $2, $10
and $12, $11, $10
or $13, $6, $7
add $14, $8, $13
(here sub depends on lw, and depends on sub, and add depends on or, so little ILP is
available)
Finding ILP:
• Must deal with groups of basic code blocks
• Common approach: loop-level parallelism
Loop Unrolling:
• Technique used to help scheduling (and performance)
• Copy the loop body and schedule instructions from different iterations of the loop
together
• MIPS example (from prev. slide):
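The MIPS example itself did not survive in these notes; as a substitute sketch, the
same transformation expressed in C (this is the scalar-add loop used earlier, and a trip
count divisible by 4 is assumed):

/* Before: one element per iteration. */
void add_scalar(double *x, double s) {
    for (int i = 1000; i > 0; i = i - 1)
        x[i] = x[i] + s;
}

/* After unrolling by 4: fewer loop-overhead instructions per element, and
   four independent operations per iteration for the scheduler to overlap. */
void add_scalar_unrolled(double *x, double s) {
    for (int i = 1000; i > 0; i = i - 4) {
        x[i]     = x[i]     + s;
        x[i - 1] = x[i - 1] + s;
        x[i - 2] = x[i - 2] + s;
        x[i - 3] = x[i - 3] + s;
    }
}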
• Benefit
– CPIs go below one, use IPC instead (instructions/cycle)
– Example: Issue width = 3 instructions, Clock = 3GHz
• Peak rate: 9 billion instructions/second, IPC = 3
• For our 5 stage pipeline, 15 instructions “in flight” at any given time
• Multiple Issue types
– Static
• Most instruction scheduling is done by the compiler
– Dynamic (superscalar)
• CPU makes most of the scheduling decisions
• Challenge: overcoming instruction dependencies
– Increased latency for loads
– Control hazards become worse
• Requires a more ambitious design
– Compiler techniques for scheduling
– Complex instruction decoding logic
Instruction Issuing
• Have to decide which instruction types can issue in a cycle
– Issue packet: instructions issued in a single clock cycle
– Issue slot: portion of an issue packet
• Compiler assumes a large responsibility for hazard checking, scheduling, etc.
Static Multiple Issue
For now, assume a "souped-up" 5-stage MIPS pipeline that can issue a packet with:
– one slot for an ALU or branch instruction, and
– one slot for a load/store instruction.
• Register Renaming
– Use more registers than are defined by the architecture
• Architectural registers: defined by ISA
• Physical registers: total registers
– Help with name dependencies
• Antidependence
– Write after Read hazard
• Output dependence
– Write after Write hazard
Write Back
– The functional unit places its result on the CDB (common data bus)
• The result goes to both the register file and the reservation stations
– Use of the CDB enables forwarding for RAW hazards
– It also introduces a latency between a result and the use of that value
Reservation Stations
• Require 7 fields:
– Operation to perform on the operands (2 operands)
– Tags showing which RS/functional unit will be producing each operand (or zero if
the operand is available/unnecessary)
– Two source operand values
– A field for holding memory address calculation data
• Initially, the immediate field of the instruction
• Later, the effective address
– Busy
• Indicates that the RS and its functional unit are busy
• Register file support
– Each entry contains a field that identifies which RS/functional unit will be writing
into this entry (or blank/zero if no one will be writing to it)
Limitation of Current Machine
• For wide-issue machines, may issue one branch per clock cycle!
• Desire:
– Predict branch direction to get more ILP
– Eliminate control dependencies
• Approach:
– Predict branches, utilize speculative instruction execution
– Requires mechanisms for “fixing” machine when speculation is incorrect
Tomasulo's with Hardware Speculation
Committing Instructions
Look at the head of the ROB.
• Three types of instructions:
– Incorrectly predicted branch
• Indicates the speculation was wrong
• Flush the ROB
• Execution restarts at the proper location
– Store
• Update memory
• Remove the store from the ROB
– Everything else
• Update the registers
• Remove the instruction from the ROB
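A rough sketch of this commit logic (the types and helper functions are hypothetical
simulator placeholders, not a description of any real machine):

#include <stdbool.h>

typedef enum { BRANCH, STORE, OTHER } InstrKind;

typedef struct {
    InstrKind kind;
    bool done;           /* has finished execution                        */
    bool mispredicted;   /* meaningful only for branches                  */
    int  dest, value;    /* register number or memory address, and result */
} RobEntry;

/* Assumed helpers supplied elsewhere in the simulator. */
void flush_rob_and_refetch(int correct_pc);
void write_memory(int addr, int value);
void write_register(int reg, int value);

void commit_head(RobEntry *head, int correct_pc) {
    if (!head->done)
        return;  /* head has not completed: nothing commits this cycle */
    if (head->kind == BRANCH && head->mispredicted)
        flush_rob_and_refetch(correct_pc);     /* speculation was wrong */
    else if (head->kind == STORE)
        write_memory(head->dest, head->value); /* stores update memory  */
    else
        write_register(head->dest, head->value);
    /* in every case the committed or flushed entry frees its ROB slot */
}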
Overview of P4
Pentium 4 Pipeline
• See handout for an overview of the major steps
• Prescott (the 90 nm version of the P4) had 31 pipeline stages
– How the pipeline is divided up is not publicly documented
• Store-to-Load Forwarding
– Stores must wait to write until non-speculative
– Loads occasionally want data from store location
– Check both cache and Store Forwarding Buffer
• SFB is where stores are waiting to be written
– If hit when comparing load address to SFB address, use SFB data, not cache data
• Done on a partial address
• Memory Ordering Buffer
– Ensures that store-to-load forwarding was correct
• If not, must re-execute load
– Force forwarding
• Mechanism for forwarding in case addresses are misaligned
• MOB can tell SFB to forward or not
– False forwarding
• Fixes partial address match between load and SFB
P4: CPI
PART - B
UNIT - 5
Basics of synchronization
UNIT V
1. MIMDs offer flexibility. With the correct hardware and software support, MIMDs
can function as single-user multiprocessors focusing on high performance for one
application, as multiprogrammed multiprocessors running many tasks simultaneously,
or as some combination of these functions.
With an MIMD, each processor is executing its own instruction stream. In many cases,
each processor executes a different process. Recall from the last chapter that a process
is a segment of code that may be run independently, and that the state of the process
contains all the information necessary to execute that program on a processor. In a
multiprogrammed environment, where the processors may be running independent
tasks, each process is typically independent of the processes on other processors. It is
also useful to be able to have multiple processors executing a single program and
sharing the code and most of their address space. When multiple processes share code
and data in this way, they are often called threads.
Today, the term thread is often used in a casual way to refer to multiple loci of
execution that may run on different processors, even when they do not share an
address space. To take advantage of an MIMD multiprocessor with n processors, we
must usually have at least n threads or processes to execute. The independent threads
are typically identified by the programmer or created by the compiler. Since the
parallelism in this situation is contained in the threads, it is called thread-level
parallelism.
distinction is that such parallelism is identified at a high level by the software system
and that the threads consist of hundreds to millions of instructions that may be
executed in parallel. In contrast, instruction-level parallelism is identified primarily by
the hardware, though with software help in some cases, and is found and exploited one
instruction at a time.
Existing MIMD multiprocessors fall into two classes, depending on the number of
processors involved, which in turn dictates a memory organization and interconnect
strategy. We refer to the multiprocessors by their memory organization because what
constitutes a small or large number of processors is likely to change over time.
The first group, which we call centralized shared-memory architectures, has at most a
few dozen processors sharing a single centralized memory. Because there is a single
main memory that has a symmetric relationship to all processors and a uniform access
time from any processor, these multiprocessors are often called symmetric
(shared-memory) multiprocessors (SMPs), and this style of architecture is sometimes
called UMA, for uniform memory access. This type of centralized shared-memory
architecture is currently by far the most popular organization.
Distributing the memory among the nodes has two major benefits. First, it is a
cost-effective way to scale the memory bandwidth if most of the accesses are to the
local memory in the node. Second, it reduces the latency for accesses to the local
memory. These two advantages make distributed memory attractive at smaller
processor counts as processors get ever faster and require more memory bandwidth
and lower memory latency. The key disadvantage of a distributed memory
architecture is that communicating data between processors becomes somewhat more
complex and has higher latency, at least when there is no contention, because the
processors no longer share a single centralized memory. As we will see shortly, the
use of distributed memory leads to two different paradigms for interprocessor
communication. Typically, I/O as well as memory is distributed among the nodes of
the multiprocessor, and the nodes may be small SMPs (2-8 processors). The use of
multiple processors in a node together with a memory and a network interface is quite
useful from the cost-efficiency viewpoint.
• Suppose you want to achieve a speedup of 80 with 100 processors. What fraction
of the original computation can be sequential?
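A worked solution using Amdahl's law (assuming the parallel portion divides
perfectly across the 100 processors):

Speedup = 1 / (F_seq + (1 - F_seq)/100) = 80
=> F_seq + (1 - F_seq)/100 = 1/80 = 0.0125
=> 100 F_seq + (1 - F_seq) = 1.25
=> 99 F_seq = 0.25
=> F_seq ≈ 0.0025

So only about 0.25% of the original computation can be sequential.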
Message-Passing Multiprocessor
- The address space can consist of multiple private address spaces that are
logically disjoint and cannot be addressed by a remote processor
Multicomputer (cluster):
Cache Coherence
Unfortunately, caching shared data introduces a new problem: the view of memory
held by two different processors is through their individual caches, which, without
any additional precautions, could end up seeing two different values. If two different
processors hold two different values for the same location, this difficulty is generally
referred to as the cache coherence problem.
• Informally:
– “If P writes x and then P1 reads it, P’s write will be seen by P1 if the read
and write are sufficiently far apart”
– Writes to a single location are serialized: seen in one order
• Latest write will be seen
• Otherwise could see writes in illogical order (could see older
value after a newer value)
The above three properties are sufficient to ensure coherence. When a written value
will be seen is also important: this issue is defined by the memory consistency model.
Coherence and consistency are complementary.
• Migration: a data item can be moved to a local cache and used there in a transparent
fashion.
• Replication: shared data that are being simultaneously read can be replicated in
multiple caches.
• Both are critical to performance in accessing shared data.
To enforce these properties, a hardware solution is adopted by introducing protocols
to maintain coherent caches, called cache coherence protocols. These protocols track
the state of any sharing of a data block. There are two classes of protocols:
• Directory based
• Snooping based
Directory based
• Sharing status of a block of physical memory is kept in one location called the
directory.
• Directory-based coherence has slightly higher implementation overhead than
snooping.
• It can scale to larger processor count.
Snooping
• Every cache that has a copy of data also has a copy of the sharing status of the
block.
• No centralized state is kept.
• Caches are also accessible via some broadcast medium (bus or switch)
• Cache controllers monitor, or snoop on, the medium to determine whether or not
they have a copy of the block that is requested in a bus or switch access.
Snooping protocols are popular with multiprocessors that attach caches to a single
shared memory, as they can use the existing physical connection (the bus to memory)
to interrogate the status of the caches. A snoop-based cache coherence scheme is
implemented on a shared bus, or on any communication medium that broadcasts cache
misses to all the processors.
Write Invalidate: the writing processor obtains exclusive access by invalidating all
other cached copies of a block before updating it; a subsequent read by another
processor misses and fetches the new value.
Write Update (write broadcast): the writing processor broadcasts the new data over
the medium, and all caches holding a copy of the block update it in place.
Example Protocol
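The protocol table itself was lost in these notes; the sketch below shows a typical
3-state (MSI-style) write-invalidate snooping protocol of the kind this section
describes (state names and transitions follow the usual textbook presentation, not
necessarily the original figure):

typedef enum { INVALID, SHARED, MODIFIED } BlockState;

/* Transition of one cache block on a request from the local CPU. */
BlockState on_cpu_request(BlockState s, int is_write) {
    if (is_write)
        return MODIFIED;   /* an invalidate is placed on the bus; all
                              other copies of the block become INVALID */
    return (s == INVALID) ? SHARED : s;  /* read miss fetches a shared copy */
}

/* Transition on a request snooped from the bus (issued by another CPU). */
BlockState on_bus_request(BlockState s, int remote_write) {
    if (remote_write)
        return INVALID;    /* a remote write invalidates our copy          */
    if (s == MODIFIED)
        return SHARED;     /* remote read: supply the dirty data, demote   */
    return s;
}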
Conclusion
• "End" of uniprocessor speedup => multiprocessors
• Parallelism challenges: % parallelizable, long latency to remote memory
• Centralized vs. distributed memory
– Small MP vs. lower latency, larger BW for larger MP
• Message passing vs. shared address
– Uniform access time vs. non-uniform access time
• Snooping cache over a shared medium for smaller MP, invalidating other cached
copies on write
• Sharing cached data => coherence (values returned by a read), consistency (when a
written value will be returned by a read)
• Shared medium serializes writes => write consistency
Implementation Complications
• Write Races:
– Cannot update cache until bus is obtained
• Otherwise, another processor may get bus first,
and then write the same cache block!
– Two step process:
• Arbitrate for bus
• Place miss on bus and complete operation
– If miss occurs to block while waiting for bus, handle miss (invalidate
may be needed) and then restart.
– Split transaction bus:
• A bus transaction is not atomic; there can be multiple outstanding transactions for a
block, and the protocol must handle the resulting races.
Performance Measurement
• Overall cache performance is a combination of
– Uniprocessor cache miss traffic
– Traffic caused by communication – invalidation and subsequent cache
misses
• Changing the processor count, cache size, and block size can affect these two
components of miss rate
• Uniprocessor miss rate: compulsory, capacity, conflict
• Communication miss rate: coherence misses
– True sharing misses + false sharing misses
Example Result
Directory Protocols
• Why synchronize?
Need to know when it is safe for different processes to use shared data.
• Issues for synchronization:
– An uninterruptible instruction to fetch and update memory (atomic operation);
– User-level synchronization operations using this primitive;
– For large-scale MPs, synchronization can be a bottleneck; techniques are needed to
reduce the contention and latency of synchronization.
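As a sketch of a user-level lock built from an atomic-exchange primitive (this uses
C11 atomics; a real implementation would add backoff and fairness):

#include <stdatomic.h>

typedef atomic_int spinlock_t;   /* 0 = unlocked, 1 = locked */

void lock(spinlock_t *l) {
    /* Atomic exchange stores 1 and returns the old value.
       If the old value was 1, another processor holds the lock: spin. */
    while (atomic_exchange(l, 1) == 1)
        ;  /* spin */
}

void unlock(spinlock_t *l) {
    atomic_store(l, 0);
}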
• Key idea: allow reads and writes to complete out of order, but to use
synchronization operations to enforce ordering, so that a synchronized program behaves
as if the processor were sequentially consistent
– By relaxing orderings, may obtain performance advantages
– Also specifies range of legal compiler optimizations on shared data
– Unless synchronization points are clearly defined and the programs are
synchronized, the compiler cannot interchange a read and a write of 2 shared data
items, because doing so might affect the semantics of the program.
• 3 major sets of relaxed orderings:
1. W → R ordering (all writes completed before next read)
• Because it retains ordering among writes, many programs that operate under
sequential consistency operate under this model without additional synchronization.
This is called processor consistency.
2. W → W ordering (all writes completed before next write)
3. R → W and R → R orderings, a variety of models depending on ordering
restrictions and how synchronization operations enforce ordering
• There are many complexities in relaxed consistency models: defining precisely what
it means for a write to complete, and deciding when processors can see values that
they themselves have written.
UNIT - VI
Introduction
Cache performance
Cache Optimizations
Virtual memory.
6 Hours
UNIT VI
REVIEW OF MEMORY HIERARCHY
The goal is to provide a memory system with cost per byte almost as low as the
cheapest level of memory and speed almost as fast as the fastest level.
• Each level maps addresses from a slower, larger memory to a smaller but faster
memory higher in the hierarchy.
– Address mapping
– Address checking
• Hence, protection schemes for scrutinizing addresses are also part of the memory
hierarchy.
• Basic principle:
– When a word is not found in the cache:
• it is fetched from memory and placed in the cache with its address tag;
• multiple words (a block) are fetched and moved, for efficiency reasons.
– Key design choice:
• Set associative placement
– A set is a group of blocks in the cache.
– A block is first mapped onto a set:
» find the mapping
» then search the set
The set is chosen by the address of the data:
(Block address) MOD (Number of sets in cache)
• With n blocks in a set:
– the cache placement is called n-way set associative.
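A small sketch of this placement computation (the cache geometry values are
illustrative):

#include <stdint.h>

#define BLOCK_SIZE 64   /* assumed 64-byte blocks */
#define NUM_SETS   256  /* assumed 256 sets       */

uint32_t set_index(uint32_t byte_address) {
    uint32_t block_address = byte_address / BLOCK_SIZE;
    return block_address % NUM_SETS;  /* (Block address) MOD (Number of sets) */
}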
Cache data access:
- cache read
- cache write
Write through: updates the cache and writes through to update the next level of
memory.
Write back: updates only the copy in the cache; memory is updated when the block is
replaced.
Both strategies can use a write buffer, which allows the cache to proceed as soon as
the data is placed in the buffer rather than waiting the full latency to write the data
into memory.
The metric used to measure these benefits is the miss rate:
Miss rate = (number of accesses that miss) / (total number of accesses)
• The causes of high miss rates are the three C's: compulsory, capacity, and conflict
misses.
Cache Optimizations
• How to combine fast hit time of Direct Mapped and have the lower conflict
misses of 2-way SA cache?
• Way prediction: keep extra bits in the cache to predict the "way," or block within
the set, of the next cache access.
– The multiplexer is set early to select the desired block, and only 1 tag comparison is
performed that clock cycle, in parallel with reading the cache data.
– On a miss => check the other blocks for matches in the next clock cycle.
• Accuracy ≈ 85%
• Drawback: makes CPU pipelining hard if the hit time is no longer constant (1 or 2
cycles)
– Used for instruction caches rather than data caches
• FP programs on average: AMAT= 0.68 -> 0.52 -> 0.34 -> 0.26
• Int programs on average: AMAT= 0.24 -> 0.20 -> 0.19 -> 0.19
• Rather than treat the cache as a single monolithic block, divide into independent
banks that can support simultaneous accesses
– E.g.,T1 (“Niagara”) L2 has 4 banks
• Banking works best when the accesses naturally spread themselves across the banks => the mapping of addresses to banks affects the behavior of the memory system
• A simple mapping that works well is "sequential interleaving":
– Spread block addresses sequentially across the banks
– E.g., if there are 4 banks, bank 0 has all blocks whose address modulo 4 is 0; bank 1 has all blocks whose address modulo 4 is 1; and so on
• Critical Word First—Request the missed word first from memory and
send it to the CPU as soon as it arrives; let the CPU continue execution while
filling the rest of the words in the block
– Long blocks are more popular today => critical word first is widely used
UNIT - VII
Introduction
Protection
6 Hours
UNIT VII
MEMORY HIERARCHY DESIGN
Summary: Caches
•The Principle of Locality:
–Programs access a relatively small portion of the address space at any instant of time.
•Two flavors: Temporal Locality (locality in time) and Spatial Locality (locality in space)
•Three Major Categories of Cache Misses:
–Compulsory Misses: the first access to a block can never hit (cold-start misses).
–Capacity Misses: reduced by increasing the cache size
–Conflict Misses: reduced by increasing the cache size and/or the associativity
•Write Policy:
–Write Through: needs a write buffer.
–Write Back: control can be complex
Summary:
Cache Organization?
•Assume the total cache size is kept unchanged while block size, associativity, or other parameters are varied
Cache Optimization
Why improve cache performance:
Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k] * z[k][j];
        x[i][j] = r;
    }
/* After: computed on B-by-B submatrices so that the active parts of
   y and z stay resident in the cache; B is the blocking factor, and
   x must be initialized to zero since it now accumulates partial sums */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B,N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B,N); k = k+1)
                    r = r + y[i][k] * z[k][j];
                x[i][j] = x[i][j] + r;
            }
[Figure: snapshot of the arrays x, y, and z when i = 1; white elements have not yet been touched, shaded elements show older and newer accesses.]
Merging Arrays
•Motivation: some programs reference multiple arrays of the same dimension with the same indices at the same time =>
these accesses can interfere with each other, leading to conflict misses
•Solution: combine these independent arrays into a single compound array, so that a single cache block can contain the desired elements
Merging Arrays Example
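The example itself was lost to a page break in these notes; the classic version of it, with a hypothetical SIZE constant, looks like this:
/* Before: two arrays indexed together can conflict in the cache */
#define SIZE 1024          /* hypothetical array size */
int val[SIZE];
int key[SIZE];

/* After: one array of structures, so val[i] and key[i]
   fall in the same cache block */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];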
Loop Fusion
• Some programs have separate sections of code that access the same common data with the same loops, performing different computations on it
• Solution: "fuse" the code into a single loop => the data fetched into the cache can be reused repeatedly before being swapped out => fewer misses via improved temporal locality (see the sketch below)
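A minimal sketch of the transformation, in the style of the blocking example above; the array names and loop bounds are illustrative, not from the text:
/* Before: a[i][j] and c[i][j] are fetched twice, once per loop */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After: fused -- each a[i][j] and c[i][j] is reused while still in cache */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }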
Data Prefetching
– The Pentium 4 can prefetch data into the L2 cache from up to 8 streams from 8 different 4 KB pages
– Prefetching is invoked on 2 successive L2 cache misses to a page, if the distance between those cache blocks is < 256 bytes
• Core memory: non-volatile, magnetic
• Lost out to the 4 Kbit DRAM (today 512 Mbit DRAM is in use)
• Access time 750 ns, cycle time 1500-3000 ns
[Figure: DRAM logical organization (4 Mbit).]
Quest for DRAM Performance
1. Fast Page mode
– Add timing signals that allow repeated accesses to the row buffer without another row access time
– Such a buffer comes naturally, as each array buffers 1024 to 2048 bits for each access
2. Synchronous DRAM (SDRAM)
– Add a clock signal to DRAM interface, so that the repeated transfers
would not bear overhead to synchronize with DRAM controller
3. Double Data Rate (DDR SDRAM)
– Transfer data on both the rising edge and the falling edge of the DRAM clock signal => doubling the peak data rate
– DDR2 lowers power by dropping the voltage from 2.5 to 1.8 volts and offers higher clock rates: up to 400 MHz
– DDR3 drops to 1.5 volts and raises clock rates further: up to 800 MHz
4. Improved bandwidth, not latency
– DRAM names are based on peak chip transfers per second
– DIMM names are based on peak DIMM MBytes per second
Need for Error Correction!
• Motivation:
– Failures/time proportional to number of bits!
– As DRAM cells shrink, more vulnerable
• There was a period in which the failure rate was low enough that memory systems went without error correction
– DRAM banks are too large now
– Servers have always had corrected memory systems
• Basic idea: add redundancy through parity bits
– Common configuration: random error correction
• SEC-DED (single error correct, double error detect)
• One example: 64 data bits + 8 parity bits (8/72 ≈ 11% overhead)
– Really want to handle failures of physical components as well
• Organization is multiple DRAMs per DIMM, multiple DIMMs
• Want to recover from a failed DRAM and a failed DIMM!
• "Chip kill" handles failures of the full width of a single DRAM chip
DRAM Technology
• Semiconductor Dynamic Random Access Memory
• Emphasis on cost per bit and capacity
• Multiplexed address lines => cutting the number of address pins in half
– Row access strobe (RAS) first, then column access strobe (CAS)
– Memory is organized as a 2D matrix – a whole row goes into a buffer
– A subsequent CAS selects the subrow
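A small C sketch of this address multiplexing; the row/column widths are hypothetical, not from the text:
#include <stdint.h>

#define COL_BITS 11   /* assumed column-address width */

/* Split a DRAM address into the row half (sent with RAS)
   and the column half (sent with CAS), halving the pin count. */
static void split_dram_addr(uint32_t addr, uint32_t *row, uint32_t *col)
{
    *row = addr >> COL_BITS;
    *col = addr & ((1u << COL_BITS) - 1);
}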
Protection:
Virtual Memory and Virtual Machines
Slide Sources: Based on “Computer Architecture” by Hennessy/Patterson.
Supplemented from various freely downloadable sources
Security and Privacy
•Innovations in Computer Architecture and System software
•Protection through Virtual Memory
•Protection from Virtual Machines
–Architectural requirements
–Performance
Protection via Virtual Memory
•Processes
–Running program
–Environment (state) needed to continue running it
•Protect Processes from each other
–Page-based virtual memory, including a TLB that caches page table entries
–Example: segmentation and paging in the 80x86
Processes share hardware without interfering with each other. To support this:
•Provide user mode and kernel (supervisor) mode
•Provide a portion of processor state that a user process can read but not write:
–User/supervisor mode bit
–Exception enable/disable bit
–Memory protection information
UNIT - 8
Introduction
7 Hours
UNIT VIII
Thus, for a write to x[a*j + b] and a read of x[c*k + d], a dependence exists between iterations j and k only if:
a × j + b = c × k + d
The Greatest Common Divisor (GCD) Test
If a loop-carried dependence exists, then:
GCD(c, a) must divide (d − b)
If the GCD does not divide (d − b), the test is sufficient to guarantee that no loop-carried dependence exists.
However, there are cases where the GCD test succeeds but no dependence exists, because the GCD test does not take the loop bounds into account.
Example:
for (i=1; i<=100; i=i+1) {
x[2*i+3] = x[2*i] * 5.0;
}
a = 2, b = 3, c = 2, d = 0
GCD(a, c) = 2
d − b = −3
2 does not divide −3 => no loop-carried dependence is possible.
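The test is mechanical enough to express directly in C; this is a sketch assuming nonzero coefficients, and the helper names are not from the text:
/* Greatest common divisor (Euclid), result made non-negative. */
static int gcd(int m, int n)
{
    while (n != 0) { int t = m % n; m = n; n = t; }
    return m < 0 ? -m : m;
}

/* GCD test for accesses x[a*j + b] (write) and x[c*k + d] (read):
   returns 1 if a loop-carried dependence is possible, 0 if the test
   proves no dependence can exist. Assumes a and c are not both zero. */
static int gcd_test(int a, int b, int c, int d)
{
    return (d - b) % gcd(c, a) == 0;
}
For the example above, gcd_test(2, 3, 2, 0) returns 0 – no dependence – matching the hand calculation.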
Array-oriented dependence analysis cannot cope with the following situations:
• When objects are referenced via pointers rather than array indices (one motivation for the points-to analysis described below)
• When array indexing is indirect through another array, which happens with many representations of sparse arrays
• When a dependence may exist for some values of the inputs, but does not exist in actuality when the code is run, since the inputs never take on those values
• When an optimization depends on knowing more than just the possibility of a dependence, and needs to know on which write of a variable a given read of that variable depends
Points-to analysis
Relies on information from three major sources:
1. Type information, which restricts what a pointer can point to.
2. Information derived when an object is allocated or when the address of an object is
taken, which can be used to restrict what a pointer can point to. For example, if p always
points to an object allocated in a given source line and q never points to that object, then
p and q can never point to the same object.
3. Information derived from pointer assignments. For example, if p may be assigned the
value of q, then p may point to anything q points to.
Eliminating Dependent Computations
Copy propagation is used to simplify sequences like the following:
DADDUI R1,R2,#4
DADDUI R1,R1,#4
to
DADDUI R1,R2,#8
Tree height reduction
•These transformations reduce the height of the tree structure representing a computation, making it wider but shorter.
Recurrences
Recurrences are expressions whose value in one iteration is given by a function that depends on the previous iterations, for example:
sum = sum + x;
Unrolled five times, this becomes:
sum = sum + x1 + x2 + x3 + x4 + x5;
Evaluated naively, this requires five sequentially dependent operations, but with tree height reduction it can be rewritten as
sum = ((sum + x1) + (x2 + x3)) + (x4 + x5);
which can be evaluated in only three levels of dependent operations.
Global code scheduling aims at:
•Finding parallelism
•Reducing control and data dependences
•Using speculation
Software pipelining
•Symbolic loop unrolling
•Gives the benefits of loop unrolling with reduced code size
•Instructions in the loop body are selected from different loop iterations
•Increases the distance between dependent instructions
SW pipelining example
Iteration i:   L.D   F0,0(R1)
               ADD.D F4,F0,F2
               S.D   F4,0(R1)
Iteration i+1: L.D   F0,0(R1)
               ADD.D F4,F0,F2
               S.D   F4,0(R1)
Iteration i+2: L.D   F0,0(R1)
               ADD.D F4,F0,F2
               S.D   F4,0(R1)
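Selecting one instruction from each of the three iterations above yields the software-pipelined loop body below (a sketch in the style of Hennessy/Patterson; the start-up and wind-down code and the setup of the loop-bound register R2 are omitted):
Loop: S.D    F4,16(R1)   ; stores into M[i]
      ADD.D  F4,F0,F2    ; adds to M[i-1]
      L.D    F0,0(R1)    ; loads M[i-2]
      DADDUI R1,R1,#-8   ; decrement the array pointer
      BNE    R1,R2,Loop  ; branch until the last element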
Advantages
•Less code space than conventional unrolling
•Loop runs at peak speed during steady state
•Overhead only at loop initiation and termination
•Complements unrolling
Disadvantages
•Hard to overlap long latencies
•Often needs to be combined with unrolling to do so
•Requires advanced compiler transformations
Trace Scheduling:
•Focusing on the Critical Path
Trace Scheduling,
Superblocks and Predicated Instructions
•Techniques for processors issuing more than one instruction on every clock cycle, where
–loop unrolling and
–software pipelining alone may not expose enough parallelism
Trace Scheduling
•Used when
– Predicated execution is not supported
– Unrolling is insufficient
• Best used
– If profile information clearly favors one path over the other
• Significant overheads are added to the infrequent path
• Two steps :
– Trace Selection
– Trace Compaction
Trace Selection
• Select a likely sequence of basic blocks whose operations can be put together
– the sequence is called a trace
• What can you select?
– Loop unrolling generates long traces
– Static branch prediction forces some straight-line code behavior
Trace Example
If the shaded portion in the previous code is the frequent path and it is unrolled 4 times, then:
• Trace exits are jumps off the frequent path
• Trace entrances are returns to the trace
Trace Compaction
•Squeeze the trace into a smaller number of wide instructions
•Move operations as early as possible in the trace
•Pack the instructions into as few wide instructions as possible
•Simplifies the decisions concerning global code motion
–All branches are viewed as jumps into or out of the trace
•Bookkeeping cost is assumed to be small
• Best used in scientific code with extensive loops
Superblock Construction
• Tail duplication
– Creates a separate block that corresponds to the portion of the trace after the entry
• When execution proceeds as per the prediction, take the path of the superblock code
• When execution exits the superblock, a residual loop handles the rest of the iterations
Analysis on Superblocks
• Reduces the complexity of bookkeeping and scheduling
– Unlike the trace approach
• Can have larger code size though
• Assessing the cost of duplication
• Compilation process is not simple any more
Another example
• A = abs(B):
if (B < 0)
    A = -B;
else
    A = B;
• This can be implemented with two conditional moves, or with one unconditional move and one conditional move (see the sketch below)
• The branch condition has moved into the instruction
– Control dependence becomes data dependence
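A sketch of the second variant in MIPS-style code; the register assignment is hypothetical (A in R1, B in R2), and MOVN is the MIPS conditional move "move if not zero":
OR    R1,R2,R0   ; A = B             (unconditional move)
DSUB  R3,R0,R2   ; R3 = -B
SLT   R4,R2,R0   ; R4 = 1 if B < 0
MOVN  R1,R3,R4   ; if (R4 != 0) A = -B  (conditional move)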
Limitations of Conditional Moves
• Conditional moves are the simplest form of predicated instructions
• Useful for short sequences
• For large code, this can be inefficient
– Introduces many conditional moves
• Some architectures support full predication
– All instructions, not just moves
• Very useful in global scheduling
– Can handle nonloop branches nicely
– E.g.: the whole if portion can be predicated if the frequent path is not taken
• Assume : Two issues, one to ALU and one to memory; or branch by itself
• Wastes a memory operation slot in second cycle
• Can incur a data dependence stall if branch is not taken
– R9 depends on R8
Predicated Execution
Assume : LWC is predicated load and loads if third operand is not 0
Some Complications
• Exception Behavior
– Must not generate exception if the predicate is false
• If R10 is zero in the previous example
– LW R8, 0(R10) can cause a protection fault
• If condition is satisfied
– A page fault can still occur
• Biggest Issue – Decide when to annul an instruction
– Can be done during issue
• Early in pipeline
• Value of condition must be known early, can induce stalls
– Can be done before commit
• Modern processors do this
• Annulled instructions will use functional resources
• Register forwarding and such can complicate implementation
Predicated execution benefits:
• One instruction issue slot is eliminated
• On a mispredicted branch, the predicated instruction will not have any effect
• If the sequence following the branch is short, the entire block of code can be predicated
Exception classes
•Recoverable: an exception from a speculative instruction may harm performance, but not preciseness
•Unrecoverable: an exception from a speculative instruction compromises preciseness
EPIC
•EPIC – Overview
– Builds on VLIW
– Redefines instruction format
– Instruction coding tells CPU how to process data
– Very compiler dependent
– Predicated execution
EPIC pros and cons
•EPIC – Pros:
– Compiler has more time to spend with code
– Time spent by compiler is a one-time cost
– Reduces circuit complexity
[Figure: Itanium chip layout and architecture diagram.]
Itanium Specs
•4 integer ALUs
•4 multimedia ALUs
•2 extended-precision FP units
•2 single-precision FP units
•2 load or store units
•3 branch units
•10-stage, 6-wide pipeline
•32 KB L1 cache
•96 KB L2 cache
•4 MB L3 cache (external)
•800 MHz clock
Intel Itanium
•800 MHz
•10 stage pipeline
•Can issue 6 instructions (2 bundles) per cycle
•4 Integer, 4 Floating Point, 4 Multimedia, 2 Memory, 3 Branch Units
•32 KB L1, 96 KB L2, 4 MB L3 caches
•2.1 GB/s memory bandwidth
Itanium2 Specs
•6 integer ALUs
•6 multimedia ALUs
•2 extended-precision FP units
•2 single-precision FP units
•2 load and store units
•3 branch units
•8-stage, 6-wide pipeline
•32 KB L1 cache
•256 KB L2 cache
•3 MB L3 cache (on die)
•1 GHz clock initially
–Up to 1.66 GHz on Montvale
Itanium2 Improvements
•Initially a 180 nm process
–Shrunk to 130 nm in 2003
–Further shrunk to 90 nm in 2007
•Improved thermal management
•Clock speed increased to 1.0 GHz
•Bus speed increased from 266 MHz to 400 MHz
Instruction Encoding
•Each instruction includes the opcode and three operands
•Each instruction holds the identifier for a corresponding predicate register
•Each bundle contains 3 independent instructions
•Each instruction is 41 bits wide
•Each bundle also holds a 5-bit template field (3 × 41 + 5 = 128 bits per bundle)
Distributing Responsibility
•ILP instruction groups
•Control flow parallelism
–Parallel comparison
–Multiway branches
•Influencing dynamic events
–Provides an extensive set of hints that the compiler uses to tell the hardware about likely branch behavior (taken or not taken, amount to fetch at the branch target) and memory operations (in what level of the memory hierarchy to cache data)
Control Speculation
• Not all branches can be removed using predication
• Loads have longer latency than most instructions and tend to start time-critical chains of instructions
• Constraints on code motion of loads limit parallelism
• Non-EPIC architectures constrain the motion of load instructions
• IA-64: speculative loads can safely be scheduled before one or more prior branches
• Exceptions are handled by setting NaT (Not a Thing) in the target register
• A check instruction branches to fix-up code if the NaT flag is set
• Fix-up code: generated by the compiler, handles exceptions
• The NaT bit propagates during execution (through almost all IA-64 instructions)
• NaT propagation reduces the number of required check points
Speculative Load
• The load instruction (ld.s) can be moved outside a basic block even if the branch target is not known
• A speculative load does not produce an exception – it sets the NaT bit
• The check instruction (chk.s) jumps to fix-up code if NaT is set
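Schematically, in IA-64 assembly; the register numbers and labels here are hypothetical, and only ld8.s and chk.s come from the text:
        ld8.s  r6 = [r8]      // speculative load, hoisted above the branch; sets NaT on exception
        ...                   // other useful work, possibly across branches
        chk.s  r6, recover    // if the NaT bit is set, branch to fix-up code
back:   ...                   // continue; r6 is now valid
recover:
        ld8    r6 = [r8]      // compiler-generated fix-up: non-speculative reload
        br     back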
Data Speculation
• The compiler may not be able to determine the location in memory being referenced (pointers)
• We want to move calculations ahead of a possible memory dependence
• Traditionally, given a store followed by a load, if the compiler cannot determine whether the addresses will be equal, the load cannot be moved ahead of the store
• IA-64: allows the compiler to schedule a load before one or more stores
• Uses the advanced load (ld.a) and check (chk.a) instructions
• The ALAT (Advanced Load Address Table) records the target register, the memory address accessed, and the access size
Data Speculation
1. Allows loads to be moved ahead of stores even if the compiler is unsure whether the addresses are the same
2. An advanced load generates an entry in the ALAT
3. A store removes every ALAT entry that has the same address
4. The check instruction branches to fix-up code if the given address is not in the ALAT
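The same shape works for data speculation; again a hypothetical sketch, where only ld8.a and chk.a come from the text:
        ld8.a  r6 = [r8]      // advanced load, moved above a possibly conflicting store; allocates an ALAT entry
        ...
        st8    [r9] = r7      // if r9 aliases r8, this store removes the ALAT entry
        chk.a  r6, recover    // branch to fix-up code if the ALAT entry is gone
back:   ...
recover:
        ld8    r6 = [r8]      // reload to pick up the value written by the store
        br     back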
Register Model
•128 general and 128 floating-point registers
•32 always available, 96 on a register stack
•As functions are called, the compiler allocates a specific number of local and output registers for the function by using the register-allocation instruction "alloc"
On a function call, the machine shifts the register window such that the previous output registers become the new locals, starting at r32.
Software Pipelining
•Loops generally encompass a large portion of a program's execution time, so it is important to expose as much loop-level parallelism as possible.
•Overlapping one loop iteration with the next can often increase the parallelism.