Fundamentals of Computer Design
Introduction
Today’s desktop computers (costing less than $500) have more performance, larger memory, and more storage than a computer bought in 1985 for $1 million. The highest-performance microprocessors of today outperform supercomputers of less than 10 years ago. This rapid improvement has come both from advances in the technology used to build computers and from innovations in computer design; in other words, the improvement in computers can be attributed to innovations in technology and in architecture design.
During the first 25 years of electronic computers, both forces made a major contribution, delivering performance improvement of about 25% per year. Microprocessors emerged during the late 1970s, and their ability to exploit the improvements in integrated circuit (IC) technology contributed to performance growth of about 35% per year.
The virtual elimination of assembly language programming reduced the need for object-code compatibility, and the creation of standardized, vendor-independent operating systems lowered the cost and risk of bringing out a new architecture.
In the early 1980s, Reduced Instruction Set Computer (RISC) based machines focused the attention of designers on two critical performance techniques: the exploitation of instruction-level parallelism (ILP) and the use of caches. Figure 1.1 shows the growth in processor performance since the mid-1980s. The graph plots performance relative to the VAX-11/780 as measured by the SPECint benchmarks. From the figure it is clear that architectural and organizational enhancements led to 16 years of sustained growth in performance at an annual rate of over 50%.
Since 2002, processor performance improvement has dropped to about 20% per year due to the following hurdles:
- Maximum power dissipation of air-cooled chips
- Little ILP left to exploit efficiently
- Limitations imposed by memory latency
These hurdles signal a historic switch from relying solely on ILP to exploiting thread-level parallelism (TLP) and data-level parallelism (DLP).
Classes of Computers
Desktop computing
The first, and still the largest market in dollar terms, is desktop computing. Desktop computing systems range in cost from $500 (low end) to $5,000 (high-end configurations). Throughout this price range, the desktop market tends to be driven to optimize price-performance; the performance concerned is compute performance and graphics performance. The combination of performance and price drives both customers and computer designers; hence, the newest, highest-performance and most cost-effective processors often appear first in desktop computers.
Servers:
Servers provide large-scale and reliable computing and file services and are mainly used in large-scale enterprise computing and web-based services. The three important characteristics of servers are:
Dependability: Servers must operate 24 hours a day, 7 days a week. Failure of a server system is far more catastrophic than failure of a desktop, since an enterprise will lose revenue if the server is unavailable.
Scalability: As the business grows, the server may have to provide more functionality/services. Thus the ability to scale up computing capacity, memory, storage, and I/O bandwidth is crucial.
Throughput: Transactions completed per minute or web pages served per second are crucial measures for servers.
Embedded Computers
i) Class of ISA: Nearly all ISAs today are classified as general-purpose register architectures, where the operands are either registers or memory locations. The two popular versions of this class are:
Register-memory ISAs, e.g., the ISA of the 80x86, which can access memory as part of many instructions.
Load-store ISAs, e.g., the ISA of MIPS, which can access memory only with load or store instructions.
ii) Memory addressing: Byte addressing is the scheme most widely used in all desktop and server computers; both the 80x86 and MIPS use byte addressing. In the case of MIPS, objects must be aligned: an access to an object of s bytes at byte address A is aligned if A mod s = 0. The 80x86 does not require alignment, but accesses are faster if operands are aligned.
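As a small illustration, here is a hypothetical C helper that applies the A mod s = 0 rule above (the function name and the use of uintptr_t are my own choices, not from the text):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* An access of size s bytes at byte address a is aligned if a mod s == 0. */
static int is_aligned(uintptr_t a, size_t s)
{
    return (a % s) == 0;
}

int main(void)
{
    /* Address 0x1004 is aligned for a 4-byte word but not for an 8-byte doubleword. */
    printf("%d\n", is_aligned(0x1004, 4)); /* prints 1 */
    printf("%d\n", is_aligned(0x1004, 8)); /* prints 0 */
    return 0;
}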
iii) Addressing modes: These specify the address of a memory object, apart from register and constant operands.
MIPS addressing modes:
Register mode addressing
Immediate mode addressing
Displacement mode addressing
In addition to the above, the 80x86 supports the following addressing modes:
i. Register indirect
ii. Indexed
iii. Based with scaled index
MIPS vs 80x86:
• Conditional branches: MIPS tests the contents of a register; the 80x86 tests condition code bits.
• Procedure call: MIPS uses JAL; the 80x86 uses CALLF.
• Return address: MIPS keeps it in a register; the 80x86 keeps it on a stack in memory.
vii) Encoding an ISA
Trends in Technology
Storage Technology:
Before 1990: the storage density increased by about 30% per year.
After 1990: the storage density increased by about 60% per year.
Disks are still 50 to 100 times cheaper per bit than DRAM.
Network Technology:
Network performance depends both on the performance of the switches and on
the performance of the transmission system.
Although the technology improves continuously, the impact of these improvements can
be in discrete leaps.
Performance trends: Bandwidth or throughput is the total amount of work done in a given time. Latency or response time is the time between the start and the completion of an event (e.g., milliseconds for a disk access).
[Figure: Relative bandwidth improvement (up to roughly 10,000x) plotted against relative latency improvement (up to roughly 100x) for network and disk; the diagonal line marks latency improvement equal to bandwidth improvement.]
For CMOS chips, the dominant source of energy consumption is the switching of transistors, also called dynamic power, which is given by the following equations.
Power_dynamic = (1/2) * Capacitive load * Voltage^2 * Frequency switched
Energy_dynamic = Capacitive load * Voltage^2
• For a fixed task, slowing the clock rate (frequency switched) reduces power, but not energy.
• Capacitive load is a function of the number of transistors connected to an output and of the technology, which determines the capacitance of the wires and transistors.
• Distributing the power, removing the heat, and preventing hot spots have become increasingly difficult challenges.
Leakage current flows even when a transistor is off, so static power is equally important:
Power_static = Current_static * Voltage
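A minimal C sketch that evaluates the power and energy equations above; the numeric values are made-up examples, not figures from the text:

#include <stdio.h>

int main(void)
{
    /* Illustrative (assumed) values. */
    double capacitive_load = 1.0e-9;  /* farads switched per cycle    */
    double voltage         = 1.1;     /* volts                        */
    double frequency       = 2.0e9;   /* switching frequency in hertz */
    double i_static        = 0.05;    /* static (leakage) current, A  */

    /* Power_dynamic = 1/2 * C * V^2 * f ;  Energy_dynamic = C * V^2 */
    double p_dynamic = 0.5 * capacitive_load * voltage * voltage * frequency;
    double e_dynamic = capacitive_load * voltage * voltage;
    /* Power_static = I_static * V */
    double p_static  = i_static * voltage;

    printf("Dynamic power : %.3f W\n", p_dynamic);
    printf("Dynamic energy: %.3e J per switch of the load\n", e_dynamic);
    printf("Static power  : %.3f W\n", p_static);
    return 0;
}

Note that halving the frequency halves the dynamic power but leaves the dynamic energy per switch unchanged, which is the point made in the bullets above.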
Trends in Cost
The underlying principle that drives cost down is the learning curve: manufacturing costs decrease over time. Volume is a second key factor in determining cost. Volume decreases cost, since it increases purchasing and manufacturing efficiency. As a rule of thumb, cost decreases about 10% for each doubling of volume.
Cost of an Integrated Circuit
Although the cost of ICs has dropped exponentially, the basic process of silicon manufacture is unchanged: a wafer is still tested and chopped into dies that are packaged.
Cost of IC = (Cost of die + Cost of testing die + Cost of packaging and final test) / Final test yield
The number of dies per wafer is approximately the area of the wafer divided by the area
of the die.
Dies per wafer = [π * (Wafer diameter / 2)^2 / Die area] − [π * Wafer diameter / sqrt(2 * Die area)]
The first term is the ratio of wafer area to die area, and the second term compensates for the rectangular dies near the periphery of round wafers (as shown in the figure).
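A small C sketch of the dies-per-wafer formula; the 30 cm wafer and 2.25 cm^2 die are assumed example values:

#include <math.h>
#include <stdio.h>

/* Dies per wafer = pi*(d/2)^2/die_area - pi*d/sqrt(2*die_area) */
static double dies_per_wafer(double wafer_diameter_cm, double die_area_cm2)
{
    const double pi = 3.14159265358979323846;
    double r = wafer_diameter_cm / 2.0;
    return (pi * r * r) / die_area_cm2
         - (pi * wafer_diameter_cm) / sqrt(2.0 * die_area_cm2);
}

int main(void)
{
    /* Example: 30 cm (300 mm) wafer and a 1.5 cm x 1.5 cm = 2.25 cm^2 die. */
    printf("Dies per wafer: %.0f\n", dies_per_wafer(30.0, 2.25));
    return 0;
}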
Dependability:
Infrastructure providers offer Service Level Agreements (SLA) or Service Level Objectives (SLO) to guarantee that their networking or power services will be dependable.
Systems alternate between 2 states of service with respect to an SLA:
1. Service accomplishment, where the service is delivered as specified in SLA
2. Service interruption, where the delivered service is different from the SLA
• Failure = transition from state 1 to state 2
• Restoration = transition from state 2 to state 1
The two main measures of dependability are module reliability and module availability. Module reliability is a measure of continuous service accomplishment (or, equivalently, of the time to failure) from a reference initial instant. Module availability is a measure of service accomplishment with respect to the alternation between the two states of accomplishment and interruption.
Example:
Calculate the failures in time (FIT) and the MTTF for a system comprising 10 disks (1-million-hour MTTF per disk), 1 disk controller (0.5-million-hour MTTF) and 1 power supply (0.2-million-hour MTTF).
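A worked solution, under the usual assumptions that component lifetimes are exponentially distributed and failures are independent, so that the system failure rate is the sum of the component failure rates:

Failure rate_system = 10 × (1/1,000,000) + 1/500,000 + 1/200,000
                    = (10 + 2 + 5) / 1,000,000
                    = 17 / 1,000,000 failures per hour

Expressed in failures per billion hours, this is FIT = 17,000. The system MTTF is the reciprocal of the failure rate:

MTTF_system = 1,000,000,000 / 17,000 ≈ 59,000 hours (a little under 7 years).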
Performance:
The execution time or response time is defined as the time between the start and completion of an event. The total amount of work done in a given time is defined as the throughput.
The administrator of a data center may be interested in increasing the throughput, while the computer user may be interested in reducing the response time. A computer user says that a computer is faster when a program runs in less time.
Performance(X) = 1 / Execution time(X)
The phrase “X is faster than Y” means that the response time or execution time is lower on X than on Y for the given task. “X is n times faster than Y” means
Performance(X) = n × Performance(Y), or equivalently, Execution time(Y) = n × Execution time(X).
Routinely executed programs are the best candidates for evaluating the performance of a new computer: to evaluate a new system, the user would simply compare the execution time of their own workloads.
Benchmarks
The real applications are the best choice of benchmarks to evaluate the performance.
However, for many of the cases, the workloads will not be known at the time of
evaluation. Hence, the benchmark program which resemble the real applications are
chosen. The three types of benchmarks are:
KERNELS, which are small, key pieces of real applications;
Toy Programs, which are 100-line programs from beginning programming assignments, such as Quicksort;
Synthetic Benchmarks: Fake programs invented to try to match the profile and
behavior of real applications such as Dhrystone.
To make the evaluation process fair, one of the following conditions is usually placed on the benchmark source code:
Source code modifications are not allowed.
Source code modifications are allowed, but are essentially impossible.
Source code modifications are allowed, as long as the modified version produces the same output.
• To increase predictability, collections of benchmark applications, called
benchmark suites, are popular
• SPEC CPU: popular desktop benchmark suite from the Standard Performance Evaluation Corporation (SPEC)
– CPU only, split between integer and floating-point programs
– SPECint2000 has 12 integer programs, SPECfp2000 has 14 floating-point programs
– SPEC CPU2006 was announced in spring 2006
– SPECSFS (NFS file server) and SPECWeb (web server) were added as server benchmarks
• The Transaction Processing Performance Council (TPC) measures server performance and cost-performance for databases
– TPC-C: complex query workload for online transaction processing
– TPC-H: models ad hoc decision support
– TPC-W: a transactional web benchmark
– TPC-App: an application server and web services benchmark
For example, if the SPECRatio of computer A is 1.25 times that of computer B, then
1.25 = SPECRatio_A / SPECRatio_B
     = (ExecutionTime_reference / ExecutionTime_A) / (ExecutionTime_reference / ExecutionTime_B)
     = ExecutionTime_B / ExecutionTime_A
     = Performance_A / Performance_B
While designing a computer, the following principles can be exploited to enhance performance.
* Parallelism is one of the most important methods for improving performance.
- One of the simplest ways to exploit it is pipelining, i.e., overlapping instruction execution to reduce the total time to complete an instruction sequence.
- Parallelism can also be exploited at the level of detailed digital design. For example, set-associative caches use multiple banks of memory that are typically searched in parallel, and carry-lookahead adders use parallelism to speed the computation of sums.
* Principle of locality: programs tend to reuse data and instructions they have used recently. A rule of thumb is that a program spends 90% of its execution time in only 10% of the code. With reasonably good accuracy, we can predict what instructions and data a program will use in the near future based on its accesses in the recent past.
* Focus on the common case: while making a design trade-off, favor the frequent case over the infrequent case. This principle applies when determining how to spend resources, since the impact of an improvement is higher if the occurrence is frequent.
Amdahl’s Law: Amdahl’s law is used to find the performance gain that can be obtained by improving some portion or functional unit of a computer. It defines the speedup that can be gained by using a particular feature.
Speedup is the ratio of the performance for the entire task using the enhancement, when possible, to the performance for the entire task without using the enhancement. Since execution time is the reciprocal of performance, speedup can equivalently be defined as the ratio of the execution time for the entire task without the enhancement to the execution time for the entire task using the enhancement when possible.
Speedup from an enhancement depends on two factors:
i. The fraction of the computation time in the original computer that can be converted to take advantage of the enhancement. Fraction_enhanced is always less than or equal to 1. Example: if 15 seconds of the execution time of a program that takes 50 seconds in total can use an enhancement, the fraction is 15/50, or 0.3.
ii. The improvement gained by the enhanced execution mode, i.e., how much faster the task would run if the enhanced mode were used for the entire program. Speedup_enhanced is the time of the original mode over the time of the enhanced mode and is always greater than 1.
Execution time_new = Execution time_old × [(1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]

Speedup_overall = Execution time_old / Execution time_new = 1 / [(1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]
A processor is driven by a clock running at a constant rate; these discrete time events are called clock ticks or clock cycles.
CPU time for a program can be evaluated:
CPU time = CPU clock cycles for a program X clock cycle time
Using the number of clock cycles and the instruction count (IC), it is possible to determine the average number of clock cycles per instruction (CPI); the reciprocal of CPI is instructions per clock (IPC). CPU time can therefore also be written as
CPU time = Instruction count × CPI × Clock cycle time
so processor performance depends on three factors: IC, CPI, and the clock rate (or clock cycle time). These three factors are interdependent, and it is difficult to change one in complete isolation from the others.
Example:
A system contains a floating-point (FP) unit and a floating-point square root (FPSQR) unit. FPSQR is responsible for 20% of the execution time, and FP operations as a whole are responsible for 50%. One proposal is to enhance the FPSQR hardware to speed up this operation by a factor of 15; the alternative is to make all FP instructions run 1.6 times faster, with the same design effort as required for the fast FPSQR. Compare the two design alternatives.
Option 1:
Speedup_FPSQR = 1 / [(1 − 0.2) + 0.2/15] = 1.2295
Option 2:
Speedup_FP = 1 / [(1 − 0.5) + 0.5/1.6] = 1.2308
Improving the FP operations overall gives a slightly better speedup because of their higher frequency of use.
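A minimal C sketch of Amdahl's law that reproduces the two alternatives above; the fractions (0.2, 0.5) and speedups (15, 1.6) come from the example, while the function name is my own:

#include <stdio.h>

/* Overall speedup = 1 / ((1 - f) + f / s), where f is the fraction of time
 * that can use the enhancement and s is the speedup of the enhanced mode. */
static double amdahl_speedup(double fraction_enhanced, double speedup_enhanced)
{
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced);
}

int main(void)
{
    printf("Option 1 (FPSQR 15x faster, 20%% of time): %.4f\n", amdahl_speedup(0.2, 15.0));
    printf("Option 2 (all FP 1.6x faster, 50%% of time): %.4f\n", amdahl_speedup(0.5, 1.6));
    return 0;
}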
Pipeline
If the pipeline stages are perfectly balanced, the speedup from pipelining equals the number of pipeline stages. In practice, the stages are not perfectly balanced, and pipelining does involve some overhead; therefore, the speedup will always be less than the number of pipeline stages.
Pipelining yields a reduction in the average execution time per instruction. If the processor is assumed to take one (long) clock cycle per instruction, then pipelining decreases the clock cycle time. If the processor is assumed to take multiple clock cycles per instruction, then pipelining helps reduce the CPI.
1. Instruction fetch cycle (IF):
Send the program counter (PC) to memory and fetch the current instruction from memory; then update the PC to the next sequential instruction by adding 4.
2. Instruction decode / Register fetch cycle (ID):
Decode the instruction and access the register file. Decoding is done in parallel with reading the registers, which is possible because the register specifiers are at fixed locations in a RISC architecture; this is known as fixed-field decoding. In addition, this cycle involves:
- performing an equality test on the registers as they are read, for a possible branch.
3. Execution / Effective address cycle (EX):
The ALU operates on the operands prepared in the previous cycle and performs one of the following functions, depending on the instruction type:
* Memory reference: Effective address ← [Base register] + offset
* Register-Register ALU instruction: the ALU performs the operation specified in the instruction using the values read from the register file.
* Register-Immediate ALU instruction: the ALU performs the operation specified in the instruction using the first value read from the register file and the sign-extended immediate.
4. Memory access (MEM)
For a load instruction, the memory is read using the effective address. For a store instruction, the memory writes the data from the second register read from the register file, using the effective address.
5. Write-back cycle (WB)
Write the result into the register file, whether it comes from the memory system (for a load instruction) or from the ALU.
Five stage Pipeline for a RISC processor
Each instruction takes at most 5 clock cycles to execute:
* Instruction fetch cycle (IF)
* Instruction decode / register fetch cycle (ID)
* Execution / Effective address cycle (EX)
* Memory access (MEM)
* Write back cycle (WB)
The execution of an instruction comprising the above subtasks can be pipelined. Each of the clock cycles from the previous section becomes a pipe stage – a cycle in the pipeline. A new instruction can be started on each clock cycle, which results in the execution pattern shown in Figure 2.1. Though each instruction takes 5 clock cycles to complete, during each clock cycle the hardware initiates a new instruction and is executing some part of five different instructions, as illustrated in Figure 2.1.
Figure 2.1 Simple RISC pipeline. On each clock cycle another instruction is fetched.
Each stage of the pipeline must be independent of the other stages. Also, two different operations cannot be performed with the same datapath resource on the same clock cycle. For example, a single ALU cannot be used to compute the effective address and perform a subtract operation during the same clock cycle. An adder is provided in stage 1 to compute the new PC value and an ALU in stage 3 to perform the arithmetic indicated in the instruction (see Figure 2.2). Conflicts should not arise out of the overlap of instructions using the pipeline; in other words, the functional units of each stage need to be independent of the functional units of the other stages. There are three observations due to which the risk of conflict is reduced:
Separate Instruction and data memories at the level of L1 cache eliminates a
conflict for a single memory that would arise between instruction fetch and data
access.
The register file is accessed during two stages, namely ID and WB. The hardware should allow at most two reads and one write every clock cycle.
To start a new instruction every cycle, it is necessary to increment and store the
PC every cycle.
Figure 2.2 Diagram indicating the cycle and functional unit of each stage.
Figure 2.3 Functional units of 5 stage Pipeline. IF/ID is a buffer between IF and ID stage.
Basic Performance issues in Pipelining
Pipelining increases the CPU instruction throughput, but it does not reduce the execution time of an individual instruction. In fact, pipelining slightly increases the execution time of each instruction due to overhead in the control of the pipeline. Pipeline overhead arises from the combination of pipeline register delays and clock skew. Imbalance among the pipe stages also reduces performance, since the clock can run no faster than the time needed for the slowest pipeline stage.
Pipeline Hazards
Hazards may cause the pipeline to stall. When an instruction is stalled, all instructions issued later than the stalled instruction are also stalled, while instructions issued earlier than the stalled instruction continue in the normal way. No new instructions are fetched during the stall.
A hazard is a situation that prevents the next instruction in the instruction stream from executing during its designated clock cycle. Hazards reduce pipeline performance.
Performance with Pipeline stall
A stall causes the pipeline performance to degrade from ideal performance. Performance
improvement from pipelining is obtained from:
CPI pipelined = Ideal CPI + Pipeline stall clock cycles per instruction
Assume that:
i) the cycle time overhead of pipelining is ignored, and
ii) the stages are perfectly balanced.
With these assumptions,
Clock cycle unpipelined = Clock cycle pipelined
Therefore, Speedup = CPI unpipelined / CPI pipelined
If all instructions take the same number of cycles, equal to the number of pipeline stages (the depth of the pipeline), then the unpipelined CPI equals the pipeline depth, and
Speedup = Pipeline depth / (1 + Pipeline stall cycles per instruction)
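For example, with these assumptions a 5-stage pipeline in which the average instruction suffers 0.25 stall cycles (hypothetical numbers) would achieve Speedup = 5 / (1 + 0.25) = 4.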
Types of hazard
Three types hazards are:
1. Structural hazard
2. Data Hazard
3. Control Hazard
Structural hazard
Structural hazards arise from resource conflicts, when the hardware cannot support all possible combinations of instructions simultaneously in overlapped execution. If some combination of instructions cannot be accommodated because of resource conflicts, the processor is said to have a structural hazard.
A structural hazard arises when some functional unit is not fully pipelined or when some resource has not been duplicated enough to allow all combinations of instructions in the pipeline to execute.
For example, if memory is shared for data and instructions, then when an instruction makes a data memory reference it will conflict with the instruction fetch of a later instruction (as shown in Figure 2.5a). This causes a hazard, and the pipeline stalls for 1 clock cycle.
Figure 2.5a The load instruction and instruction 3 both access memory in clock cycle 4.
Data Hazard
Consider the pipelined execution of the following instruction sequence (the timing diagram is shown in Figure 2.6):
DADD R1, R2, R3
DSUB R4, R1, R5
AND R6, R1, R7
OR R8, R1, R9
XOR R10, R1, R11
The DADD instruction produces the value of R1 in its WB stage (clock cycle 5), but the DSUB instruction reads the value during its ID stage (clock cycle 3). This problem is called a data hazard.
DSUB may read the wrong value if precautions are not taken. The AND instruction reads the register during clock cycle 4 and will also receive the wrong result.
The XOR instruction operates properly, because its register read occurs in clock cycle 6
after DADD writes in clock cycle 5. The OR instruction also operates without incurring a
hazard because the register file reads are performed in the second half of the cycle
whereas the writes are performed in the first half of the cycle.
The DADD instruction actually produces the value of R1 at the end of clock cycle 3 (its EX stage), and the DSUB instruction needs it only at the beginning of clock cycle 4. If the result can be moved from the pipeline register where DADD stores it to the point where DSUB needs it (the input of the ALU), then the need for a stall can be avoided. Using a simple hardware technique called data forwarding (also known as bypassing or short-circuiting), data can be made available from the output of the ALU to the point where it is required (the input of the ALU) at the beginning of the immediately following clock cycle.
Forwarding works as follows:
i) The ALU output from the EX/MEM and MEM/WB pipeline registers is always fed back to the ALU inputs.
ii) If the forwarding hardware detects that a previous ALU output serves as a source for the current ALU operation, the control logic selects the forwarded result as the ALU input rather than the value read from the register file.
Forwarded results are required not only from the immediately previous instruction but also from an instruction that started 2 cycles earlier: the result of the i-th instruction may need to be forwarded to the (i+2)-th instruction as well.
Forwarding can be generalized to include passing a result directly to the functional unit
that requires it.
LD R1, 0(R2)
DADD R3, R1, R4
AND R5, R1, R6
OR R7, R1, R8
The pipelined data path for these instructions is shown in the timing diagram (figure 2.7)
Instruction          1    2    3     4      5     6     7     8    9
LD R1, 0(R2)         IF   ID   EXE   MEM    WB
DADD R3, R1, R4           IF   ID    EXE    MEM   WB
AND R5, R1, R6                 IF    ID     EXE   MEM   WB
OR R7, R1, R8                        IF     ID    EXE   MEM   WB

LD R1, 0(R2)         IF   ID   EXE   MEM    WB
DADD R3, R1, R4           IF   ID    stall  EXE   MEM   WB
AND R5, R1, R6                 IF    stall  ID    EXE   MEM   WB
OR R7, R1, R8                        stall  IF    ID    EXE   MEM  WB

Figure 2.7 In the top half, we can see why a stall is needed. In the bottom half, a stall is inserted to solve the problem.
The LD instruction gets the data from memory only at the end of clock cycle 4 (its MEM stage). Even with forwarding, the data from the LD instruction can therefore be made available at the earliest during clock cycle 5. The DADD instruction, however, requires the result of the LD instruction at the beginning of its EX stage, in clock cycle 4. This would demand forwarding backward in time, which is not possible. Hence, the situation calls for a pipeline stall.
The result from the LD instruction can be forwarded from the pipeline register to the AND instruction, which begins 2 clock cycles after the LD instruction.
The load instruction has a delay or latency that cannot be eliminated by forwarding alone.
It is necessary to stall the pipeline by 1 clock cycle. A hardware mechanism called a pipeline interlock detects the hazard and stalls the pipeline until the hazard is cleared. The pipeline interlock
helps to preserve the correct execution pattern by introducing a stall or bubble. The CPI
for the stalled instruction increases by the length of the stall. Figure 2.7 shows the
pipeline before and after the stall.
The stall causes the DADD to move 1 clock cycle later in time. The forwarding to the AND instruction now goes through the register file, and no forwarding at all is needed for the OR instruction. No instruction is started during clock cycle 4.
Control Hazard
When a branch is executed, it may or may not change the content of the PC. If the branch is taken, the PC is changed to the target address; if the branch is not taken, the PC is not changed.
The simplest way of dealing with branches is to redo the fetch of the instruction following a branch. The first IF cycle is essentially a stall, because it never performs useful work. One stall cycle for every branch yields a performance loss of 10% to 30%, depending on the branch frequency.
Figure 2.8 The predicted-not-taken scheme and the pipeline sequence when the
branch is untaken (top) and taken (bottom).
3. Treat every branch as taken: As soon as the branch is decoded and the target address is computed, begin fetching and executing at the target. This scheme is advantageous only if the branch target is known before the branch outcome.
For both the predicted-taken and predicted-not-taken schemes, the compiler can improve performance by organizing the code so that the most frequent path matches the hardware choice.
4. Delayed branch technique is commonly used in early RISC processors.
In a delayed branch, the execution cycle with a branch delay of one is
Branch instruction
Sequential successor-1
Branch target if taken
The sequential successor is in the branch delay slot and it is executed irrespective of
whether or not the branch is taken. The pipeline behavior with a branch delay is shown in
Figure 2.9. Processors with delayed branches normally have a single instruction delay. The compiler has to make the successor instruction valid and useful; there are three ways in which the delay slot can be filled by the compiler.
Figure 2.9 Timing diagram of the pipeline to show the behavior of a delayed branch
is the same whether or not the branch is taken.
The limitations on delayed branch arise from
i) Restrictions on the instructions that are scheduled into delay slots.
ii) The ability to predict at compile time whether a branch is likely to be taken or not taken.
The delay slot can be filled from choosing an instruction
a) From before the branch instruction
b) From the target address
c) From the fall-through path.
The principle of scheduling the branch delay is shown in fig 2.10
Types of exceptions:
The term exception is used to cover the terms interrupt, fault and exception.
I/O device request, page fault, Invoking an OS service from a user program, Integer
arithmetic overflow, memory protection overflow, Hardware malfunctions, Power failure
etc. are the different classes of exception.
Individual events have important characteristics that determine what action is needed
corresponding to that exception.
i) Synchronous versus asynchronous:
If the event occurs at the same place every time the program is executed with the same data and memory allocation, the event is synchronous. Asynchronous events are caused by devices external to the CPU and memory; such events are handled after the completion of the current instruction.
ii) User requested versus coerced: User requested exceptions are predictable
and can always be handled after the current instruction has completed. Coerced
exceptions are caused by some hardware event that is not under the control of the user
program. Coerced exceptions are harder to implement because they are not predictable
iii) User maskable versus user non maskable :
If an event can be masked by a user task, it is user maskable. Otherwise it is user non
maskable.
Stopping and restarting execution:
The most difficult exceptions have two properties:
1. They occur within instructions.
2. They must be restartable.
For example, a page fault must be restartable and requires the intervention of the OS. Thus the pipeline must be safely shut down, so that the instruction can be restarted in the correct state. If the restarted instruction is not a branch, then we continue to fetch the sequential successors and begin their execution in the normal fashion.
Pipeline implementation
1. Instruction fetch cycle (IF)
IR ← Mem[PC]
NPC ← PC + 4
Operation: Send out the PC and fetch the instruction from memory into the instruction register (IR). Increment the PC by 4 to address the next sequential instruction.
2. Instruction decode / Register fetch cycle (ID)
A ← Regs[rs]
B ← Regs[rt]
Operation: Decode the instruction and access the register file to read the registers rs and rt. A and B are temporary registers; the operands are kept ready for use in the next cycle. Decoding is done in parallel with reading the registers; since the MIPS ISA has fixed-length instructions, these fields are at fixed locations.
3. Execution / Effective address cycle (EX)
*. Register-Register ALU instruction:
ALUOutput ← A func B
Operation: The ALU performs the operation specified by the function code on the values in temporary registers A and B, and places the result in the temporary register ALUOutput.
*. Register-Immediate ALU instruction:
ALUOutput ← A op Imm
Operation: The contents of register A and the sign-extended immediate Imm are operated on (function op) and the result is placed in the temporary register ALUOutput.
*. Branch:
ALUOutput ← NPC + (Imm << 2)
Cond ← (A == 0)
Advanced Computer Architecture-
Hardware Based Speculation (unit 3)
Tomasulo algorithm and Reorder Buffer
Tomasulo's idea:
1. Have reservation stations, where register renaming is possible.
2. Results are forwarded directly to the reservation stations as well as to the final registers; this is also called short-circuiting or bypassing.
ROB:
1. Instructions are stored in program order, with indicators saying whether each is still speculative or has completed execution.
2. If an instruction has completed execution, is no longer speculative, and has reached the head of the queue, then we commit it.
Example
1. L.D F6, 34(R2)
2. L.D F2, 45(R3)
3. MUL.D F0, F2, F4
4. SUB.D F8, F2, F6
5. DIV.D F10, F0, F6
6. ADD.D F6, F8, F2
The position of Reservation stations, ROB and FP registers are indicated below:
Assume latencies load 1 clock, add 2 clocks, multiply 10 clocks, divide 40 clocks
Show data structures just before MUL.D goes to commit…
Reservation Stations
Name   Busy
Load1  no
Load2  no
Add1   no
Add2   no
Add3   no
At the time MUL.D is ready to commit only the two L.D instructions have already committed, though others
have completed execution
Actually, the MUL.D is at the head of the ROB – the L.D instructions are shown only for understanding
purposes #X represents value field of ROB entry number X
Floating point registers
Field     F0   F1  F2  F3  F4  F5  F6   F7   F8   F10
Reorder#  3                        6         4    5
Busy      yes  no  no  no  no  no  yes  ...  yes  yes
Reorder Buffer
Example
Loop: LD    F0, 0(R1)
      MULTD F4, F0, F2
      SD    F4, 0(R1)
      SUBI  R1, R1, #8
      BNEZ  R1, Loop
Notes
• If a branch is mispredicted, recovery is done by flushing the ROB of all entries that appear
after the mispredicted branch
• entries before the branch are allowed to continue
• restart the fetch at the correct branch successor
• When an instruction commits or is flushed from the ROB then the corresponding slots
become available for subsequent instructions
1. MIMDs offer flexibility. With the correct hardware and software support, MIMDs
can function as single-user multiprocessors focusing on high performance for one
application, as multiprogrammed multiprocessors running many tasks simultaneously, or
as some combination of these functions.
2. MIMDs can build on the cost/performance advantages of off-the-shelf
microprocessors. In fact, nearly all multiprocessors built today use the same
microprocessors found in workstations and single-processor servers.
With an MIMD, each processor is executing its own instruction stream. In many cases,
each processor executes a different process. Recall from the last chapter that a process is a segment of code that may be run independently, and that the state of the process
contains all the information necessary to execute that program on a processor. In a
multiprogrammed environment, where the processors may be running independent tasks,
each process is typically independent of the processes on other processors.
It is also useful to be able to have multiple processors executing a single program and
sharing the code and most of their address space. When multiple processes share code
and data in this way, they are often called threads. Today, the term thread is often used in a casual way to refer to multiple loci of
execution that may run on different processors, even when they do not share an address
space. To take advantage of an MIMD multiprocessor with n processors, we must usually
have at least n threads or processes to execute. The independent threads are typically
identified by the programmer or created by the compiler. Since the parallelism in this
situation is contained in the threads, it is called thread-level parallelism.
Threads may vary from large-scale, independent processes–for example, independent
programs running in a multiprogrammed fashion on different processors– to parallel
iterations of a loop, automatically generated by a compiler and each executing for
perhaps less than a thousand instructions. Although the size of a thread is important in
considering how to exploit thread-level parallelism efficiently, the important qualitative
distinction is that such parallelism is identified at a high-level by the software system and
that the threads consist of hundreds to millions of instructions that may be executed in
parallel. In contrast, instruction-level parallelism is identified primarily by the
hardware, though with software help in some cases, and is found and exploited one
instruction at a time.
Existing MIMD multiprocessors fall into two classes, depending on the number of
processors involved, which in turn dictate a memory organization and interconnect
strategy. We refer to the multiprocessors by their memory organization, because what
constitutes a small or large number of processors is likely to change over time.
The first group, which we call centralized shared-memory architectures, has at most a few dozen processors in 2000.
For multiprocessors with small processor counts, it is possible for the processors to share
a single centralized memory and to interconnect the processors and memory by a bus.
With large caches, the bus and the single memory, possibly with multiple banks, can
satisfy the memory demands of a small number of processors. By replacing a single bus
with multiple buses, or even a switch, a centralized shared memory design can be scaled
to a few dozen processors. Although scaling beyond that is technically conceivable,
sharing a centralized memory, even organized as multiple banks, becomes less attractive
as the number of processors sharing it increases.
Because there is a single main memory that has a symmetric relationship to all processors
and a uniform access time from any processor, these multiprocessors are often called
symmetric (shared-memory) multiprocessors ( SMPs), and this style of architecture is
sometimes called UMA for uniform memory access. This type of centralized shared-
memory architecture is currently by far the most popular organization.
The second group consists of multiprocessors with physically distributed memory. To
support larger processor counts, memory must be distributed among the processors rather
than centralized; otherwise the memory system would not be able to support the
bandwidth demands of a larger number of processors without incurring excessively long
access latency. With the rapid increase in processor performance and the associated
increase in a processor’s memory bandwidth requirements, the scale of multiprocessor for
which distributed memory is preferred over a single, centralized memory continues to
decrease in number (which is another reason not to use small and large scale). Of course,
the larger number of processors raises the need for a high bandwidth interconnect.
Distributed-memory multiprocessor
Distributing the memory among the nodes has two major benefits. First, it is a cost-
effective way to scale the memory bandwidth, if most of the accesses are to the local
memory in the node. Second, it reduces the latency for accesses to the local memory.
These two advantages make distributed memory attractive at smaller processor counts as
processors get ever faster and require more memory bandwidth and lower memory
latency. The key disadvantage for a distributed memory architecture is that
communicating data between processors becomes somewhat more complex and has
higher latency, at least when there is no contention, because the processors no longer
share a single centralized memory. As we will see shortly, the use of distributed memory
leads to two different paradigms for interprocessor communication.
Typically, I/O as well as memory is distributed among the nodes of the multiprocessor,
and the nodes may be small SMPs (2–8 processors). The use of multiple processors in a node together with a memory and a network interface is quite useful from the cost-efficiency viewpoint.
Challenges for Parallel Processing
• Limited parallelism available in programs
– Need new algorithms that can have better parallel performance
• Suppose you want to achieve a speedup of 80 with 100 processors. What fraction
of the original computation can be sequential?
By Amdahl's law,
80 = 1 / [(1 − Fraction_parallel) + Fraction_parallel / 100]
Solving, (1 − Fraction_parallel) + Fraction_parallel / 100 = 1/80 = 0.0125, which gives Fraction_parallel ≈ 0.9975.
That is, to achieve a speedup of 80 with 100 processors, no more than about 0.25% of the original computation can be sequential.
Developed by:
• IBM – one-chip multiprocessor
• AMD and Intel – two-processor (dual-core) chips
• Sun – 8-processor multicore
Symmetric shared-memory machines support caching of
• Shared Data
• Private Data
Private data: used by a single processor. When a private item is cached, its location is migrated to the cache. Since no other processor uses the data, the program behavior is identical to that in a uniprocessor.
Cache Coherence
Unfortunately, caching shared data introduces a new problem because the view of
memory held by two different processors is through their individual caches, which,
without any additional precautions, could end up seeing two different values.
That is, if two different processors can end up with two different values for the same location, this difficulty is referred to as the cache coherence problem.
Cache coherence problem for a single memory location
• Informally:
– "Any read must return the most recent write"
– Too strict and too difficult to implement
• Better:
– "Any write must eventually be seen by a read"
– All writes are seen in proper order ("serialization")
• Two rules to ensure this:
– "If P writes x and then P1 reads it, P's write will be seen by P1 if the read and write are sufficiently far apart"
– Writes to a single location are serialized: seen in one order
• Latest write will be seen
• Otherwise could see writes in illogical order (could see an older value after a newer value)
Directory based
• Sharing status of a block of physical memory is kept in one location called the
directory.
• Directory-based coherence has slightly higher implementation overhead than
snooping.
• It can scale to larger processor count.
Snooping
• Every cache that has a copy of data also has a copy of the sharing status of the
block.
• No centralized state is kept.
• Caches are also accessible via some broadcast medium (bus or switch)
• Cache controllers monitor, or snoop, on the medium to determine whether or not they have a copy of a block that is requested on a bus or switch access.
Snooping protocols are popular with multiprocessors whose caches are attached to a single shared memory, because they can use the existing physical connection (the bus to memory) to interrogate the status of the caches. A snoop-based cache coherence scheme is typically implemented on a shared bus, but any communication medium that broadcasts cache misses to all the processors can be used.
Write invalidate: the writing processor forces all other caches to invalidate their copies before changing its local copy.
Write update: the writing processor broadcasts the new data so that all cached copies are updated.
Example Protocol
• Snooping coherence protocol is usually implemented by incorporating a finite-
state controller in each node
• Logically, think of a separate controller associated with each cache block
– That is, snooping operations or cache requests for different blocks can
proceed independently
• In implementations, a single controller allows multiple operations to distinct
blocks to proceed in interleaved fashion
– that is, one operation may be initiated before another is completed, even though only one cache access or one bus access is allowed at a time
Example Write Back Snoopy Protocol
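The protocol table itself is not reproduced in this copy. As a rough illustration only, the C sketch below models the state of a single cache block in a three-state (Invalid/Shared/Modified) write-invalidate protocol; it ignores data movement, write-backs, and all of the races discussed below, and the state and event names are my own:

#include <stdio.h>

/* States of one cache block in a simplified MSI write-invalidate protocol. */
typedef enum { INVALID, SHARED, MODIFIED } BlockState;

/* Events seen by the cache controller for that block: requests from its own
 * processor, and transactions snooped on the shared bus from other caches. */
typedef enum { CPU_READ, CPU_WRITE, BUS_READ_MISS, BUS_WRITE_MISS } Event;

/* Return the next state; comments note the bus action that would accompany it. */
static BlockState next_state(BlockState s, Event e)
{
    switch (s) {
    case INVALID:
        if (e == CPU_READ)  return SHARED;   /* place a read miss on the bus   */
        if (e == CPU_WRITE) return MODIFIED; /* place a write miss on the bus  */
        return INVALID;                      /* snooped traffic: nothing to do */
    case SHARED:
        if (e == CPU_WRITE)      return MODIFIED; /* place an invalidate on the bus */
        if (e == BUS_WRITE_MISS) return INVALID;  /* another cache wants to write   */
        return SHARED;                            /* CPU_READ hit or BUS_READ_MISS  */
    case MODIFIED:
        if (e == BUS_READ_MISS)  return SHARED;  /* write the block back, share it   */
        if (e == BUS_WRITE_MISS) return INVALID; /* write the block back, give it up */
        return MODIFIED;                         /* local reads and writes hit       */
    }
    return INVALID;
}

int main(void)
{
    BlockState s = INVALID;
    s = next_state(s, CPU_READ);      /* read miss:    INVALID  -> SHARED   */
    s = next_state(s, CPU_WRITE);     /* upgrade:      SHARED   -> MODIFIED */
    s = next_state(s, BUS_READ_MISS); /* other reader: MODIFIED -> SHARED   */
    printf("final state = %d (1 == SHARED)\n", s);
    return 0;
}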
Implementation Complications
• Write Races:
– Cannot update cache until bus is obtained
• Otherwise, another processor may get bus first,
and then write the same cache block!
– Two step process:
• Arbitrate for bus
• Place miss on bus and complete operation
– If miss occurs to block while waiting for bus,
handle miss (invalidate may be needed) and then restart.
– Split transaction bus:
• Bus transaction is not atomic:
can have multiple outstanding transactions for a block
• Multiple misses can interleave,
allowing two caches to grab block in the Exclusive state
• Must track and prevent multiple misses for one block
• Must support interventions and invalidations
Performance Measurement
• Overall cache performance is a combination of
– Uniprocessor cache miss traffic
– Traffic caused by communication – invalidation and subsequent cache
misses
• Changing the processor count, cache size, and block size can affect these
two components of miss rate
• Uniprocessor miss rate: compulsory, capacity, conflict
• Communication miss rate: coherence misses
– True sharing misses + false sharing misses
– A false sharing miss occurs when the block is shared but no word in the block is actually shared by the processors; such a miss would not occur if the block size were a single word
• Assume that words x1 and x2 are in the same cache block, which is in the
shared state in the caches of P1 and P2. Assuming the following sequence of
events, identify each miss as a true sharing miss or a false sharing miss.
Time   P1          P2
1      Write x1
2                  Read x2
3      Write x1
4                  Write x2
5      Read x2
Example Result
• 1: True sharing miss (invalidate x1 in P2)
• 2: False sharing miss
– x2 was invalidated by the write of P1, but that value of x1 is not used in
P2
• 3: False sharing miss
– The block containing x1 is marked shared due to the read in P2, but P2 did
not read x1. A write miss is required to obtain exclusive access to the
block
• 4: False sharing miss
• 5: True sharing miss
• In addition to tracking the state of each cache block, we must track the processors
that have copies of the block when it is shared (usually a bit vector for each
memory block: 1 if processor has copy)
• Keep it simple(r):
– Writes to non-exclusive data => write miss
– Processor blocks until access completes
– Assume messages received and acted upon in order sent
– Read miss: the owner processor is sent a data fetch message, which causes the state of the block in the owner's cache to transition to Shared and causes the owner to send the data to the directory, where it is written to memory and sent back to the requesting processor. The identity of the requesting processor is added to the set Sharers, which still contains the identity of the processor that was the owner (since it still has a readable copy). The state is Shared.
– Data write-back: owner processor is replacing the block and hence must
write it back, making memory copy up-to-date
(the home directory essentially becomes the owner), the block is now
Uncached, and the Sharer set is empty.
– Write miss: block has a new owner. A message is sent to old owner causing
the cache to send the value of the block to the directory from which it is
sent to the requesting processor, which becomes the new owner. Sharers is
set to identity of new owner, and state of block is made Exclusive.
The goal is to provide a memory system whose cost per byte is almost as low as that of the cheapest level and whose speed is almost as fast as the fastest level.
• Each level maps addresses from a slower, larger memory to a smaller but faster
memory higher in the hierarchy.
– Address mapping
– Address checking.
• Hence, protection schemes for scrutinizing addresses are also part of the memory hierarchy.
Memory Hierarchy
[Figure: Memory hierarchy. Capacity increases and speed decreases moving down the hierarchy: cache (access ~1 ns, transfers blocks), main memory (~512 MB, ~100 ns, transfers pages), disk (~100 GB, ~5 ms, transfers files to/from I/O).]
[Figure: Processor versus memory performance, 1980–2010, on a log scale. Processor performance has grown much faster than memory performance, widening the processor–memory gap.]
• The importance of memory hierarchy has increased with advances in performance
of processors.
• Prototype
– When a word is not found in cache
• Fetched from memory and placed in cache with the address tag.
• Multiple words (a block) are fetched and moved, for efficiency reasons.
– key design
• Set associative
– Set is a group of block in the cache.
– Block is first mapped on to set.
» Find mapping
» Searching the set
Chosen by the address of the data:
(Block address) MOD(Number of sets in cache)
• With n blocks in a set, the placement is called n-way set associative.
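A tiny C sketch of the (Block address) MOD (Number of sets) mapping; the cache geometry used here is an assumed example:

#include <stdio.h>

int main(void)
{
    /* Assumed geometry: 64-byte blocks, 4-way set associative, 32 KB cache,
     * which gives 32768 / (64 * 4) = 128 sets. */
    unsigned block_size = 64, ways = 4, cache_size = 32 * 1024;
    unsigned num_sets = cache_size / (block_size * ways);

    unsigned long byte_address  = 0x12345678;
    unsigned long block_address = byte_address / block_size;
    unsigned long set_index     = block_address % num_sets; /* (Block address) MOD (Number of sets) */

    printf("sets=%u  block address=%lu  set index=%lu\n",
           num_sets, block_address, set_index);
    return 0;
}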
Cache data
- Cache read.
- Cache write.
Write through: updates the cache and also writes through to update main memory.
Both strategies
- can use a write buffer, which allows the cache to proceed as soon as the data is placed in the buffer rather than waiting the full latency to write the data into memory.
The metric used to measure the benefit is the miss rate:
Miss rate = Number of accesses that miss / Total number of accesses
Write back: updates only the copy in the cache; the modified block is written to memory when it is replaced.
• Causes of high miss rates
– The three Cs model sorts all misses into three categories:
• Compulsory: the very first access to a block can never be in the cache
– compulsory misses are those that would occur even with an infinite cache
• Capacity: the cache cannot contain all the blocks needed by the program
– blocks are discarded and later retrieved
• Conflict: the block placement strategy is not fully associative
– a block may be discarded and later retrieved if too many blocks map to its set
Miss rate can be a misleading measure for several reasons, so misses per instruction is often used instead of misses per memory reference:
Misses / Instruction = Miss rate × (Memory accesses / Instruction count)
- A larger cache lowers the miss rate but increases hit time, cost, and power.
[Figure: Access time (ns, roughly 0.5–2.5) versus cache size, from 16 KB to 1 MB.]
Second optimization: Way prediction (to reduce hit time)
[Figure: access time broken into hit time, way-miss hit time, and miss penalty.]
– Multiplexer is set early to select desired block, only 1 tag comparison
performed that clock cycle in parallel with reading the cache data
– Miss ⇒ 1st check other blocks for matches in next clock cycle
• Accuracy ≈ 85%
• Drawback: CPU pipeline is hard if hit takes 1 or 2 cycles
– Used for instruction caches vs. data caches
Third optimization: Trace Cache
• Find more instruction level parallelism?
How to avoid translation from x86 to microops?
• Trace cache in Pentium 4
1. Dynamic traces of the executed instructions vs. static sequences of instructions as
determined by layout in memory
– Built-in branch predictor
2. Cache the micro-ops vs. x86 instructions
– Decode/translate from x86 to micro-ops on trace cache miss
+ 1. ⇒ better utilize long blocks (don’t exit in middle of block, don’t enter at label
in middle of block)
1.8
1.6
1.4
0->1
1.2
1->2
1
2->64
0.8
Base
0.6
0.4
0.2
0
compress
espresso
spice2g6
swm256
hydro2d
tomcatv
mdljsp2
mdljdp2
su2cor
fpppp
nasa7
doduc
wave5
alvinn
eqntott
xlisp
ear
ora
• Banking works best when accesses naturally spread themselves across banks ⇒
mapping of addresses to banks affects behavior of memory system
• A simple mapping that works well is "sequential interleaving"
– Spread block addresses sequentially across banks
– E.g., if there are 4 banks, bank 0 has all blocks whose address modulo 4 is 0, bank 1 has all blocks whose address modulo 4 is 1, and so on
Seventh optimization: Reduce Miss Penalty – Early Restart and Critical Word First
• Don’t wait for full block before restarting CPU
• Early restart—As soon as the requested word of the block arrives, send it to the
CPU and let the CPU continue execution
– Spatial locality ⇒ tend to want next sequential word, so not clear size of
benefit of just early restart
• Critical Word First—Request the missed word first from memory and send it to
the CPU as soon as it arrives; let the CPU continue execution while filling the rest
of the words in the block
– Long blocks more popular today ⇒ Critical Word 1st Widely used
[Figure: Miss rate (0 to ~0.1) versus blocking factor (0 to 150).]
[Figure: Performance improvement on SPECint2000 (gap, mcf) and SPECfp2000 (wupwise, swim, mgrid, applu, galgel, facerec, lucas, fma3d, equake) benchmarks, ranging from about 1.16 to 1.97.]
The techniques to improve hit time, bandwidth, miss penalty and miss rate
generally affect the other components of the average memory access equation as
well as the complexity of the memory hierarchy.
Advanced Computer Architecture - Virtual Memory
•Virtual memory – This is the concept of separation of logical memory from physical memory.
•Only a part of the program needs to be in memory for execution. Hence, logical address
space can be much larger than physical address space.
•Allows address spaces to be shared by several processes (or threads).
•Allows for more efficient process creation.
•Implementation
There are two main methods of implementing Virtual memory
1. Demand paging
2. Demand segmentation
Virtual Address
•The concept of a virtual (or logical) address space that is bound to a separate physical
address space is central to memory management
– Virtual addresses are generated by the CPU; the CPU assumes the entire address space, 2^(number of address lines) addresses, to be available.
–Physical address is the address actually seen by the physical memory
•Virtual and physical addresses are the same in compile-time and load-time address-binding
schemes; virtual and physical addresses differ in execution-time address-binding schemes
Replacement:
– A cache miss is handled by hardware; a page fault is handled by the OS.
• The processor address size determines the size of virtual memory
– For a cache, the size is smaller and has no connection to the processor address size
• A cache acts only as a faster copy of memory
– The secondary memory backing virtual memory is also used for the file system
Parameters of Cache and VM
•Block size: 16 vs 4096
•Hit time: 2 vs 200 clk cycles
•Miss penalty: 100 vs 1,000,000 clk cycles
•Access time: 20 vs 2,000,000 clk cycles
•Transfer time: 20 vs 2,000,000 clk cycles
•Miss rate: 1% vs 0.0001%
•Address mapping: Phy to cache vs Virtual address to Physical address
Page Table and Address Translation
Page Table Structure
Examples
[Figure: TLB organization — each entry holds an address space number (ASN, 8 bits), protection bits (Pr, 4 bits), a valid bit (V), a tag (35 bits), and a physical page number (PPN, 31 bits); a 128:1 multiplexer and tag compare select the matching entry to form the 44-bit physical address.]
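As a rough illustration of page-table-based address translation (a single-level table with 4 KB pages; all names and sizes here are assumptions for the sketch, not the organization shown in the figure):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u   /* 4 KB pages                   */
#define NUM_PAGES 16u     /* tiny single-level page table */

/* One page-table entry: a valid bit plus the physical page number. */
typedef struct { int valid; uint32_t ppn; } PTE;

/* Translate a virtual address, or return -1 to signal a page fault. */
static int64_t translate(const PTE table[], uint32_t vaddr)
{
    uint32_t vpn    = vaddr / PAGE_SIZE;   /* virtual page number  */
    uint32_t offset = vaddr % PAGE_SIZE;   /* byte within the page */

    if (vpn >= NUM_PAGES || !table[vpn].valid)
        return -1;                         /* page fault: handled by the OS */
    return (int64_t)table[vpn].ppn * PAGE_SIZE + offset;
}

int main(void)
{
    PTE table[NUM_PAGES] = {0};
    table[3].valid = 1;
    table[3].ppn   = 42;                   /* virtual page 3 -> physical page 42 */

    uint32_t vaddr = 3 * PAGE_SIZE + 0x10;
    printf("vaddr 0x%x -> paddr %lld\n", vaddr, (long long)translate(table, vaddr));
    return 0;
}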
Advanced Computer Architecture-Memory Hierarchy Design
AMAT and Processor Performance
•AMAT = Average Memory Access Time
•Miss-oriented Approach to Memory Access
–CPIExec includes ALU and Memory instructions
•Separating out Memory component entirely
–CPIALUOps does not include memory instructions
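The equations these bullets refer to are the standard ones (the formula images themselves do not survive in this copy):

AMAT = Hit time + Miss rate × Miss penalty

Miss-oriented view:
CPU time = IC × (CPI_Execution + Memory accesses/Instruction × Miss rate × Miss penalty) × Clock cycle time

Separating out the memory component:
CPU time = IC × (ALU ops/Instruction × CPI_ALUOps + Memory accesses/Instruction × AMAT) × Clock cycle time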
Summary: Caches
•The Principle of Locality:
–Program access a relatively small portion of the address space at any instant of
time.
•Temporal Locality OR Spatial Locality:
•Three Major Categories of Cache Misses:
–Compulsory Misses: sad facts of life. Example: cold start misses.
–Capacity Misses: increase cache size
–Conflict Misses: increase cache size and/or associativity
[Figure: Miss rate (up to ~0.14) versus cache size (1 KB to 128 KB), broken into compulsory, capacity, and conflict components for 1-way, 2-way, 4-way, and 8-way set-associative caches.]
Cache Optimisation
Why improve Cache performance:
Loop Interchange
•Motivation: some programs have nested loops that access data in nonsequential order
•Solution: Simply exchanging the nesting of the loops can make the code access the data in
the order it is stored =>
reduce misses by improving spatial locality; reordering maximizes use of data in a
cache block before it is discarded
Loop Interchange Example
/* Before */
for (j = 0; j < 100; j = j+1)
    for (i = 0; i < 5000; i = i+1)
        x[i][j] = 2 * x[i][j];

/* After */
for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
        x[i][j] = 2 * x[i][j];
Blocking
•Motivation: multiple arrays, some accessed by rows and some by columns
•Storing the arrays row by row (row major order) or column by column (column major
order) does not help: both rows and columns are used in every iteration of the loop
(Loop Interchange cannot help)
•Solution: instead of operating on entire rows and columns of an array, blocked
algorithms operate on submatrices or blocks => maximize accesses to the data loaded
into the cache before the data is replaced
Blocking Example
/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k+1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    };

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B,N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B,N); k = k+1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            };
Snapshot of x, y, z
when i=1
Merging Arrays
•Motivation: some programs reference multiple arrays in the same dimension with the same indices at the same time; these accesses can interfere with each other, leading to conflict misses
•Solution: combine these independent arrays into a single compound array, so that a single cache block can contain the desired elements
Merging Arrays Example
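The example code does not survive in this copy; the classic illustration, sketched here with assumed array names, merges two parallel arrays into a single array of structures:

/* Before: two arrays indexed together may conflict in the cache. */
#define SIZE 1024
int val[SIZE];
int key[SIZE];

/* After: one array of structures, so val[i] and key[i] fall in the same cache block. */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];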
Loop Fusion
Some programs have separate sections of code that access the same data with the same loops, performing different computations on the common data.
Solution:
"Fuse" the code into a single loop, so that the data fetched into the cache can be used repeatedly before being swapped out, reducing misses via improved temporal locality.
Loop Fusion Example
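The example code does not survive in this copy; here is a sketch in the same style as the other examples (array names assumed):

/* Before: two separate loop nests traverse a[][] and c[][] twice. */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        a[i][j] = 1/b[i][j] * c[i][j];
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1)
        d[i][j] = a[i][j] + c[i][j];

/* After: fused into one loop, so a[i][j] and c[i][j] are reused while still in the cache. */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        a[i][j] = 1/b[i][j] * c[i][j];
        d[i][j] = a[i][j] + c[i][j];
    }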
Summary of Compiler Optimizations- to Reduce Cache Misses (by hand)
Prefetching relies on having extra memory bandwidth that can be used without
penalty
• Instruction Prefetching
– Typically, CPU fetches 2 blocks on a miss: the requested block and the
next consecutive block.
– Requested block is placed in instruction cache when it returns, and prefetched
block is placed into instruction stream buffer
Data Prefetching
– Pentium 4 can prefetch data into L2 cache from up to 8 streams from 8
different 4 KB pages
– Prefetching is invoked if there are 2 successive L2 cache misses to a page and the distance between those cache blocks is < 256 bytes
DRAM Technology
• Semiconductor Dynamic Random Access Memory
• Emphasis is on cost per bit and capacity
• Multiplex address lines cutting # of address pins in half
– Row access strobe (RAS) first, then column access strobe (CAS)
– Memory as a 2D matrix – rows go to a buffer
– Subsequent CAS selects subrow
• Use only a single transistor to store a bit
– Reading that bit can destroy the information
– Refresh each bit periodically (ex. 8 milliseconds) by writing back
• Keep refreshing time less than 5% of the total time
• DRAM capacity is 4 to 8 times that of SRAM
RAS improvement
SRAM Technology
• Cache uses SRAM: Static Random Access Memory
• SRAM uses six transistors per bit to prevent the information from being disturbed when read, so there is no need to refresh
– SRAM needs only minimal power to retain the charge in standby mode, which is good for embedded applications
– There is no difference between access time and cycle time for SRAM
• Emphasis is on speed and capacity
– SRAM address lines are not multiplexed
• SRAM speed is 8 to 16x that of DRAM
–the dependence between the two statements in the loop is no longer loop-carried and
iterations of the loop may be executed in parallel
–
Loop-carried dependence detection for affine array indices (of the form a × i + b):
To detect a loop-carried dependence, the compiler can use the Greatest Common Divisor (GCD) test. Suppose an array element indexed by a × i + b is stored and an element indexed by c × i + d of the same array is loaded later, where the loop index runs from m to n. A dependence exists if the following two conditions hold:
1. There are two iteration indices, j and k, both within the limits of the loop (m ≤ j ≤ n and m ≤ k ≤ n).
2. The loop stores into the array element indexed by a × j + b and later fetches from that same element when it is indexed by c × k + d, that is, a × j + b = c × k + d.
The GCD test states that if such a loop-carried dependence exists, then GCD(c, a) must divide (d − b); if it does not, the references are independent.
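A small C sketch of the GCD test (the helper names are my own); it returns 0 when the test proves the two references independent and 1 when a dependence may exist:

#include <stdio.h>
#include <stdlib.h>

/* Greatest common divisor (Euclid's algorithm). */
static int gcd(int a, int b)
{
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return abs(a);
}

/* GCD test: for a store to index a*i + b and a later load of index c*i + d,
 * a loop-carried dependence can exist only if gcd(c, a) divides (d - b). */
static int gcd_test_may_depend(int a, int b, int c, int d)
{
    int g = gcd(c, a);
    if (g == 0)
        return b == d;          /* degenerate case: both indices are constants */
    return (d - b) % g == 0;
}

int main(void)
{
    /* x[2*i+3] = ...;  ... = x[2*i];  gcd(2,2)=2 does not divide -3: independent. */
    printf("%d\n", gcd_test_may_depend(2, 3, 2, 0));  /* prints 0 */
    /* x[4*i] = ...;   ... = x[6*i+2]; gcd(6,4)=2 divides 2: may be dependent.    */
    printf("%d\n", gcd_test_may_depend(4, 0, 6, 2));  /* prints 1 */
    return 0;
}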
Points-to analysis
Relies on information from three major sources:
1. Type information, which restricts what a pointer can point to.
2. Information derived when an object is allocated or when the address of an object is
taken, which can be used to restrict what a pointer can point to. For example, if p
always points to an object allocated in a given source line and q never points to that
object, then p and q can never point to the same object.
3. Information derived from pointer assignments. For example, if p may be assigned
the value of q, then p may point to anything q points to.
Eliminating dependent computations
Copy propagation is used to simplify sequences like the following:
DADDUI R1,R2,#4
DADDUI R1,R1,#4
to
DADDUI R1,R2,#8
Recurrence
Recurrences are expressions whose value in one iteration is given by a function that
depends on the previous iterations, for example:
sum = sum + x;
Unrolling such a loop five times gives
sum = sum + x1 + x2 + x3 + x4 + x5;
which, left unoptimized, requires five dependent operations. It can be rewritten as
sum = ((sum + x1) + (x2 + x3)) + (x4 + x5);
which can be evaluated with only three dependent operations on the critical path.
Software pipelining
•Symbolic loop unrolling
•Benefits of loop unrolling with reduced code size
•Instructions in loop body selected from different loop iterations
•Increases the distance between dependent instructions
Software pipelined loop:
Loop: SD F4,16(R1) #store to v[i]
ADDD F4,F0,F2 #add to v[i-1]
LD F0,0(R1) #load v[i-2]
ADDI R1,R1,-8
BNE R1,R2,Loop
5 cycles/iteration (with dynamic scheduling and renaming)
Need startup/cleanup code
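In source terms, the pipelined loop above corresponds to a simple vector update of roughly this shape (names are illustrative; F2 is assumed to hold the loop-invariant value c):

void vec_update(double *v, long n, double c)
{
    /* Each iteration loads v[i], adds c, and stores the result back.
       The software-pipelined schedule overlaps the store of iteration i,
       the add of iteration i+1, and the load of iteration i+2. */
    for (long i = n - 1; i >= 0; i--)
        v[i] = v[i] + c;
}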
Software pipelining (cont.)
SW pipelining example
Iteration i: L.D F0,0(R1)
ADD.D F4,F0,F2
S.D F4,0(R1)
Iteration i+1: L.D F0,0(R1)
ADD.D F4,F0,F2
S.D F4,0(R1)
Iteration i+2: L.D F0,0(R1)
ADD.D F4,F0,F2
S.D F4,0(R1)
Advantages
•Less code space than conventional unrolling
•Loop runs at peak speed during steady state
•Overhead only at loop initiation and termination
•Complements unrolling
Disadvantages
•Hard to overlap long latencies
•In practice, usually must be combined with unrolling
•Requires advanced compiler transformations
Global Code Scheduling
•Global code scheduling aims to compact a code fragment with internal control structure
into the shortest possible sequence that preserves the data and control dependences.
•Data dependences are overcome by unrolling and, in the case of memory operations, by
using dependence analysis to determine whether two references refer to the same address.
•Finding the shortest possible sequence means scheduling the critical path: the longest chain of dependent instructions.
•Reduce the effect of control dependences arising from conditional nonloop branches by
moving code.
•Since moving code across branches will often affect the frequency of execution of such
code, effectively using global code motion requires estimates of the relative frequency of
different paths.
•If the frequency information is accurate, global code motion is likely to lead to faster code.
Global code scheduling- cont.
•Global code motion is important since many inner loops contain conditional statements.
•Effectively scheduling this code could require moving the assignments to B and C earlier
in the execution sequence, before the test of A (see the sketch below).
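A C-level reading of the fragment that the assembly below implements (based on its comments: R1, R2, and R3 hold the addresses of A[i], B[i], and C[i]; the elided values are left elided):

A[i] = A[i] + B[i];      /* load A, load B, add, store A  */
if (A[i] == 0)
    B[i] = ...;          /* then part: store to B         */
else
    X;                   /* else part: code for X         */
C[i] = ...;              /* after the if: store to C[i]   */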
Trace Scheduling:
•Focusing on the Critical Path
LD R4,0(R1) ;load A
LD R5,0(R2) ;load B
DADDU R4,R4,R5 ;Add to A
SD R4,0(R1) ;Store A
...
BNEZ R4,elsepart ;Test A
... ;then part
SD ...,0(R2) ;Stores to B
...
J join ;jump over else
elsepart: ... ;else part
X ;code for X
...
join: ... ;after if
SD ...,0(R3) ;store C[i]
Trace Scheduling, Superblocks and Predicated Instructions
•For processors issuing more than one instruction on every clock cycle, the following
techniques are important:
–Loop unrolling,
–software pipelining,
–trace scheduling, and
–superblock scheduling
Trace Scheduling
•Used when
– Predicated execution is not supported
– Unrolling is insufficient
• Best used
– If profile information clearly favors one path over the other
• Significant overheads are added to the infrequent path
• Two steps :
– Trace Selection
– Trace Compaction
Trace Selection
Likely sequence of basic blocks that can be put together
– Sequence is called a trace
• What can you select?
– Loop unrolling generates long traces
– Static branch prediction forces some straight-line code behavior
Trace Selection (cont.)
Trace Example
If the shaded portion in the previous code was the frequent path and it was unrolled 4 times:
• Trace exits are jumps off the frequent path
• Trace entrances are returns to the trace
Trace Compaction
•Squeeze the trace into a smaller number of wide instructions
• Move operations as early as possible within the trace
• Pack the instructions into as few wide instructions as possible
• Simplifies the decisions concerning global code motion
– All branches are viewed as jumps into or out of the trace
• Bookkeeping
– Cost is assumed to be small
• Best used in scientific code with extensive loops
Superblock Construction
• Tail duplication
– Creates a separate block that corresponds to the portion of the trace after the entry
• When execution proceeds as per the prediction
– Take the path of the superblock code
• When execution exits the superblock
– A residual loop handles the rest of the iterations
Analysis on Superblocks
• Reduces the complexity of bookkeeping and scheduling
– Unlike the trace approach
• Can have larger code size though
• Assessing the cost of duplication
• The compilation process is no longer simple
Example
if (A==0)
S = T;
Assuming R1, R2, and R3 hold the values of A, S, and T:
Straightforward code:
BNEZ R1,L
ADDU R2,R3,R0
L:
Conditional code:
CMOVZ R2,R3,R1 ;annulled if R1 is not 0
Conditional Instruction …
• Can convert control to data dependence
• In vector computing, it’s called if conversion.
• Traditionally, in a pipelined system
– Dependence has to be resolved closer to front of pipeline
• For conditional execution
– Dependence is resolved at end of pipeline, closer to the register write
Another example
• A = abs(B), i.e.:
if (B < 0)
A = -B;
else
A = B;
• Can be implemented either with two conditional moves, or with one unconditional
move (A = B) and one conditional move (A = -B)
• The branch condition has moved into the instruction
– Control dependence becomes data dependence
Example
• Assume: two issues, one to ALU and one to memory; or a branch by itself
• Wastes a memory operation slot in the second cycle
• Can incur a data dependence stall if the branch is not taken
– R9 depends on R8
Predicated Execution
• Assume: LWC is a predicated load that loads only if the third operand is not 0
• One instruction issue slot is eliminated
• On a mispredicted branch, the predicated instruction will not have any effect
• If the sequence following the branch is short, the entire block of code can be predicated
Some Complications
• Exception Behavior
– Must not generate exception if the predicate is false
• If R10 is zero in the previous example
– LW R8, 0(R10) can cause a protection fault
• If condition is satisfied
– A page fault can still occur
• Biggest Issue – Decide when to annul an instruction
– Can be done during issue
• Early in pipeline
• Value of condition must be known early, can induce stalls
– Can be done before commit
• Modern processors do this
• Annulled instructions will use functional resources
• Register forwarding and such can complicate implementation
Exception classes
•Recoverable: exception from speculative instruction may harm performance, but not
preciseness
•Unrecoverable: exception from speculative instruction compromises preciseness
Solution I: Ignore exceptions (HW/SW solution)
•An instruction causing an exception returns an undefined value
•The value is not used if the instruction is speculative
•If the instruction is non-speculative, the result would be incorrect, so
–the compiler generates code to throw the regular exception
•Registers receiving speculative results are renamed
Example (Solution I)
LD R1,0(R3) ;load A
SLD R4,0(R2) ;speculative load B
BEQZ R1,L3 ;test A
ADDI R4,R1,#4 ;else
L3: SD R4,0(R3) ;store A
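A hedged C-level reading of this example (from the comments: R1 holds A, R4 receives B via the speculative load):

if (A == 0)
    A = B;        /* the load of B is hoisted above the test as a speculative load */
else
    A = A + 4;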
Solution IV: HW mechanism like a ROB
•Instructions are marked as speculative
–How many branches the instruction was speculatively moved across
–The branch action (T/NT) assumed by the compiler
•Usually only one branch
•Otherwise functions like a ROB
HW support for Memory Reference Speculation
•Moving loads across stores
–To avoid an address conflict
–A special instruction checks for an address conflict
•Left at the original location of the load instruction
•Acts like a guardian
•On the speculative load, the HW saves the address
–Speculation fails if a store changes this address before the check instruction
•Fix-up code re-executes all speculated instructions
Chip Layout
•Itanium architecture diagram (figure)
Itanium Specs
•4 Integer ALU's
•4 multimedia ALU's
•2 Extended Precision FP Units
•2 Single Precision FP units
•2 Load or Store Units
•3 Branch Units
•10 Stage 6 Wide Pipeline
•32 KB L1 Cache
•96 KB L2 Cache
•4 MB L3 Cache (external)
•800 MHz Clock
Intel Itanium
•800 MHz
•10 stage pipeline
•Can issue 6 instructions (2 bundles) per cycle
•4 Integer, 4 Floating Point, 4 Multimedia, 2 Memory, 3 Branch Units
•32 KB L1, 96 KB L2, 4 MB L3 caches
•2.1 GB/s memory bandwidth
Itanium2 Specs
•6 Integer ALU's
•6 multimedia ALU's
•2 Extended Precision FP Units
•2 Single Precision FP units
•2 Load and Store Units
•3 Branch Units
•8 Stage 6 Wide Pipeline
•32 KB L1 Cache
•256 KB L2 Cache
•3 MB L3 Cache (on die)
•1 GHz Clock initially
–Up to 1.66 GHz on Montvale
Itanium2 Improvements
•Initially a 180 nm process
–Reduced to 130 nm in 2003
–Further reduced to 90 nm in 2007
•Improved Thermal Management
•Clock speed increased to 1.0 GHz
•Bus speed increased from 266 MHz to 400 MHz
•L3 cache moved on die
–Faster access rate
IA-64 Pipeline Features
•Branch Prediction
–Predicate Registers allow branches to be turned on or off
–Compiler can provide branch prediction hints
•Register Rotation
–Allows faster loop execution in parallel
•Predication Controls Pipeline Stages
Cache Features
•L1 Cache
–4-way associative
–16 KB Instruction
–16 KB Data
•L2 Cache
–Itanium
•6-way associative
•96 KB
–Itanium2
•8-way associative
•256 KB initially
–256 KB Data and 1 MB Instruction on Montvale
Cache Features
•L3 Cache
–Itanium
•4-way associative
•Accessible through FSB
•2-4 MB
–Itanium2
•2- to 4-way associative
•On Die
•3 MB
–Up to 24 MB on Montvale chips (12 MB/core)
Register Specification
128 65-bit General Purpose Registers (64 data bits plus a NaT bit)
128 82-bit Floating Point Registers
128 64-bit Application Registers
8 64-bit Branch Registers
64 1-bit Predicate Registers
Register Model
128 General and Floating Point Registers
32 always available, 96 on stack
As functions are called, the compiler allocates a specific number of local and
output registers for use in the function with the register allocation instruction
“Alloc”. Register references are renamed so that the stacked registers appear as r32 to r127.
Register Stack Engine (RSE) automatically saves/restores stack to memory
when needed
RSE may be designed to utilize unused memory bandwidth to perform register spill
and fill operations in the background
Register Stack
On function call, machine shifts register window such that previous output registers
become new locals starting at r32
Registers (figure)
Instruction Encoding
Each instruction in a bundle maps to one of five execution unit slots (figure).
The code is scheduled to minimize the number of cycles, assuming one bundle is executed
per cycle.
Predication
C code:
if (condition) {
    ...   /* then part */
} else {
    ...   /* else part */
}
Traditional code: a compare followed by a branch around the then and else parts.
IA-64: the compare sets predicates p1 and p2; the then part executes under (p1) and the
else part under (p2), so the branch is eliminated.
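A rough C analogue of if conversion (the computation is made up): both arms are evaluated and the condition only selects the result, which is what predication achieves without a branch.

int if_convert(int cond, int a, int b)
{
    /* Branchy form:  if (cond) x = a + b; else x = a - b;  */
    int t_then = a + b;   /* would execute under (p1) on IA-64 */
    int t_else = a - b;   /* would execute under (p2) on IA-64 */
    return cond ? t_then : t_else;   /* selection, no taken branch needed */
}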
Control speculation
Not all branches can be removed using predication.
Loads have longer latency than most instructions and tend to start time-critical
chains of instructions.
Constraints on code motion for loads limit parallelism.
Non-EPIC architectures constrain the motion of load instructions.
IA-64: speculative loads can safely be scheduled before one or more prior branches.
Control Speculation
Exceptions are handled by setting the NaT (Not a Thing) bit in the target register.
A check instruction branches to fix-up code if the NaT flag is set.
Fix-up code: generated by the compiler, handles the exception.
The NaT bit propagates through execution (almost all IA-64 instructions).
NaT propagation reduces the number of required check points.
Speculative Load
A load instruction (ld.s) can be moved outside of a basic block even if the branch target
is not known.
A speculative load does not produce an exception - it sets the NaT bit instead.
A check instruction (chk.s) will jump to fix-up code if NaT is set.
Data Speculation
The compiler may not be able to determine the location in memory being
referenced (pointers)
Data Speculation
1. Allows for loads to be moved ahead of stores even if the compiler is unsure if
addresses are the same
2. A speculative load generates an entry in the ALAT
3. A store removes every entry in the ALAT that has the same address
4. Check instruction will branch to fix-up if the given address is not in the ALAT
(Figure: an advanced load ld.a adds its address to the ALAT; a store removes matching
entries; the check instruction verifies the entry is still present. Traditional code vs.
IA-64 with the ALAT.)
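The problem that data speculation solves is visible in plain C: without knowing whether two pointers alias, the compiler cannot move the load above the store. A minimal sketch (the function and parameter names are hypothetical):

void advance_load_sketch(int *p, int *q, int *r)
{
    /* The load *q can be hoisted above the store *p only if p and q never
       alias.  With data speculation, an advanced load (ld.a) performs the
       load early and records its address in the ALAT; a later store removes
       matching entries, and a check (chk.a) at the original load site
       branches to fix-up code if the entry has been removed. */
    *p = 1;
    *r = *q + 2;
}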
Software Pipelining
Loops generally encompass a large portion of a program’s execution time, so it’s
important to expose as much loop-level parallelism as possible.
Overlapping one loop iteration with the next can often increase the parallelism.
Software Pipelining
We can implement loops in parallel by resolving several problems:
Managing the loop count,
Handling the renaming of registers for the pipeline,
Finishing the work in progress when the loop ends,
Starting the pipeline when the loop is entered, and
Unrolling to expose cross-iteration parallelism.
•IA-64 gives hardware support to compilers managing a software pipeline
•Facilities for managing loop count, loop termination, and rotating registers
“The combination of these loop features and predication enables the compiler to generate
compact code, which performs the essential work of the loop in a highly parallel form.”