Unit - 4 Pushdown Automata: Code Optimization and Code Generation

The document discusses code optimization and code generation techniques. It covers topics like introduction to optimization, machine-independent and machine-dependent optimizations, and various optimization techniques like constant folding, common subexpression elimination, copy propagation, code motion, peephole optimization, dead code elimination, and reduction in strength. Peephole optimization is described as examining short sequences of target instructions to replace them with shorter or faster sequences.

Uploaded by Abhilash Sharma

Unit – 4: Pushdown Automata
Chapter 6: Code optimization and code generation

Chapter – 6 : Code Optimization & Generation 1 Bahir Dar Institute of Technology


Topics to be covered
▪ Introduction
▪ Optimization techniques
▪ Peephole optimization
▪ Loops in flow graphs
▪ Code generation



Code optimization: Introduction
▪ Optimization is a program transformation technique that tries to
improve the code by making it consume fewer resources (i.e., CPU,
memory) and run faster.
▪ The code produced by straightforward compiling algorithms can
often be made to run faster or take less space, or both.
▪ This improvement is achieved by program transformations that are
traditionally called optimizations.
▪ Compilers that apply code-improving transformations are called
optimizing compilers.

[Figure: position of the code optimizer, between the front end and the code generator]



Code optimization
▪ Optimizations are classified into two categories.
• Machine independent optimizations:
▪ are program transformations that improve the target code without
taking into consideration any properties of the target machine.
• Machine dependent optimizations:
▪ are based on register allocation and utilization of special machine-
instruction sequences.
▪ Both categories are performed with regard to:
• memory space and
• speed
▪ The goal of optimization is:
• Produce Better Code
• Fewer instructions
• Faster Execution

• Do Not Change Behavior of Program!


Optimization Techniques
▪ A vast range of optimizations has been applied and studied.
▪ Some optimizations provided by a compiler (the principal sources of
optimization) include:
• Compile time evaluation
• Constant folding
• Constant propagation
• Dead code elimination
• Arithmetic simplification
• Copy propagation
• Common sub-expression elimination
• Code motion
• Peep-hole optimization



Compile time evaluation
▪ Compile time evaluation means shifting computations from run time
to compile time; that is,
▪ the process of evaluating constant expressions at compile time and
replacing them by their values.
▪ There are two methods used to obtain compile time evaluation.
Constant folding
▪ In the folding technique, the computation of constants is done at compile
time instead of run time.
Example: length = (22/7)*d
▪ Here, folding means performing the computation of 22/7 at
compile time.
Constant propagation
▪ In this technique, the value of a variable is replaced and the computation of
an expression is done at compile time.
Example: pi = 3.14; r = 5;
Area = pi * r * r;
▪ Here, at compile time, the value of pi is replaced by 3.14 and r by
5, and the computation of 3.14 * 5 * 5 is done during compilation.
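Both forms of compile-time evaluation can be sketched as one small pass over an expression tree. The Python sketch below is illustrative only (the tuple representation and the name fold are not from the slides): it propagates known variable values and folds any subexpression whose operands are both constants.

```python
# Expressions are tuples: ('const', v), ('var', name), or (op, left, right).
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b}

def fold(expr, env=None):
    """Evaluate as much of expr as possible at 'compile time'."""
    env = env or {}
    kind = expr[0]
    if kind == 'const':
        return expr
    if kind == 'var':
        # Constant propagation: replace a variable whose value is known.
        return ('const', env[expr[1]]) if expr[1] in env else expr
    left, right = fold(expr[1], env), fold(expr[2], env)
    if left[0] == 'const' and right[0] == 'const':
        # Constant folding: both operands are known, so compute now.
        return ('const', OPS[kind](left[1], right[1]))
    return (kind, left, right)

# count = i * 7 with i known to be 3 folds to a single constant.
print(fold(('*', ('var', 'i'), ('const', 7)), {'i': 3}))  # ('const', 21)
```

An expression with an unknown variable is left partially folded, which is exactly the behavior a real pass needs.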
Common subexpression elimination
▪ A common subexpression is an expression that appears
repeatedly in the program and has been computed previously.
▪ If the operands of this subexpression do not change at all,
then the result of the earlier computation is used instead of
recomputing it each time.
▪ Example:
t1 := 4 * i
t2 := a[t1]
t3 := 4 * j
t4 := 4 * i      ← eliminated: same as t1
t5 := n
t6 := b[t4] + t5     (t4 is replaced by t1 in b[t4])
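The replacement above can be expressed as a small local pass over three-address tuples. This Python sketch is illustrative (the (dest, op, arg1, arg2) representation and the name eliminate_cse are assumptions): it remembers which destination already holds each (op, arg1, arg2) value and turns a recomputation into a copy.

```python
def eliminate_cse(code):
    """Local CSE over (dest, op, arg1, arg2) three-address tuples."""
    available = {}   # (op, arg1, arg2) -> dest that already holds the value
    out = []
    for dest, op, a1, a2 in code:
        key = (op, a1, a2)
        if key in available:
            # Reuse the earlier result instead of recomputing it.
            out.append((dest, 'copy', available[key], None))
        else:
            available[key] = dest
            out.append((dest, op, a1, a2))
        # Redefining dest kills any available expression that used it.
        for k in [k for k in available if dest in (k[1], k[2])]:
            del available[k]
    return out

code = [('t1', '*', '4', 'i'),
        ('t2', '[]', 'a', 't1'),
        ('t3', '*', '4', 'j'),
        ('t4', '*', '4', 'i')]          # same computation as t1
print(eliminate_cse(code)[-1])          # ('t4', 'copy', 't1', None)
```

The kill step matters: once an operand is reassigned, the stored result is stale and must not be reused.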



Copy Propagation
▪ Copy propagation means using one variable instead of another.
▪ The idea behind the copy-propagation transformation is to use v
for u wherever possible after the copy statement u = v.
▪ It deals with copies to temporary variables, such as a = b.
▪ Compilers generate many such copies themselves in intermediate form.
▪ Copy propagation is the process of removing them and replacing
them with references to the original.
▪ One advantage of copy propagation is that it often turns the copy
statement into dead code.

Example 1 (before):   y = pi;              (after):   area = pi * r * r;
                      area = y * r * r;

Example 2 (before):   t0 = P + A           (after):   t1 = P + A
                      t1 = t0
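Example 1 can be mechanized with a forward scan that tracks active copies. This Python sketch is illustrative (representation and names are assumptions, and it deliberately skips one refinement: a fuller pass would also invalidate a copy when its *source* is reassigned).

```python
def propagate_copies(code):
    """Replace uses of copied temporaries over (dest, op, arg1, arg2) tuples."""
    copies = {}          # dest -> original name it was copied from
    out = []
    for dest, op, a1, a2 in code:
        # Substitute the original name for any use of a known copy.
        a1 = copies.get(a1, a1)
        a2 = copies.get(a2, a2)
        if op == 'copy':
            copies[dest] = a1
        else:
            copies.pop(dest, None)   # dest is redefined: no longer a copy
        out.append((dest, op, a1, a2))
    return out

code = [('y', 'copy', 'pi', None),
        ('area', '*', 'y', 'r')]
print(propagate_copies(code)[1])   # ('area', '*', 'pi', 'r')
```

After this pass the statement y = pi is no longer used, so a following dead-code pass can delete it, which is exactly the advantage noted above.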
Code Motion
▪ Optimization can be obtained by moving some amount of code
outside the loop and placing it just before entering the loop.
▪ This is also called loop-invariant code motion: it applies when the same
value is computed on every iteration of the loop.

▪ Example (before):          (after):
while (i <= max-1)           N = max-1;
{                            while (i <= N)
    sum = sum + a[i];        {
}                                sum = sum + a[i];
                             }
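The transformation can be checked directly: hoisting the invariant max-1 out of the loop leaves the result unchanged. A small Python rendering of the two versions (function and variable names are illustrative):

```python
def sum_unhoisted(a, max_):
    s, i = 0, 0
    while i <= max_ - 1:      # max-1 recomputed on every iteration
        s += a[i]
        i += 1
    return s

def sum_hoisted(a, max_):
    s, i = 0, 0
    n = max_ - 1              # loop-invariant code moved before the loop
    while i <= n:
        s += a[i]
        i += 1
    return s

a = [3, 1, 4, 1, 5]
assert sum_unhoisted(a, len(a)) == sum_hoisted(a, len(a)) == 14
```

The hoisted form does strictly less work per iteration while computing the same value, which is the whole point of the transformation.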



Reduction in Strength
▪ The strength (cost) of certain operators is higher than that of others.
▪ For instance, the strength of * is higher than that of +.
▪ In this technique, higher-strength operators are replaced by
lower-strength operators.
▪ Example 1 (before):        (after):
for (i=1; i<=50; i++)        temp = 7;
{                            for (i=1; i<=50; i++)
    count = i*7;             {
}                                count = temp;
                                 temp = temp + 7;
                             }
▪ Here we get the count values 7, 14, 21, … and so on.
▪ Example 2:
i / 2 = (int) (i * 0.5)
0 – i = -i
f * 2 = 2.0 * f = f + f
f / 2.0 = f * 0.5
NB: f is a floating-point number, i is an integer.
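Example 1 can be verified by running both forms side by side. The Python sketch below is illustrative: the multiplication i*7 inside the loop is replaced by a running sum, so only additions execute per iteration.

```python
def multiples_mul():
    """Original form: one multiplication per iteration."""
    return [i * 7 for i in range(1, 51)]

def multiples_add():
    """Strength-reduced form: additions only."""
    out, temp = [], 7
    for _ in range(1, 51):
        out.append(temp)
        temp += 7        # next multiple by addition, not multiplication
    return out

assert multiples_mul() == multiples_add()
print(multiples_add()[:3])   # [7, 14, 21]
```

On hardware where multiplication is slower than addition, this is a net win; when induction variables like i*7 appear as array offsets, the same idea removes the multiply from every subscript.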
Reduction in strength
▪ Certain machine instructions are cheaper than others.
▪ To improve performance of the intermediate code, we can
replace an instruction by an equivalent cheaper instruction.
▪ For example, x * x is cheaper than x², since the latter calls an
exponentiation routine (the book presents two views on this; see pages
565/804 and 609/804).
▪ Floating-point division by a constant can be replaced by
multiplication by a constant.
▪ Similarly, addition and subtraction are cheaper than multiplication
and division, so we can substitute equivalent additions and
subtractions for multiplications and divisions where possible.



Dead code elimination
▪ A variable is said to be dead at a point in a program if the
value it contains is never used afterwards.
▪ Code that computes only such variables is dead code.
▪ Example:
i = 0;
if (i == 1)
{
    a = x + 5;     // dead code
}
▪ The if statement is dead code, as its condition can never be
satisfied; hence the statement can be eliminated and the code
optimized.
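For straight-line code, deadness can be decided with a backward liveness scan: an assignment whose destination is not live after it (and that has no side effects) can be dropped. This Python sketch is illustrative; the representation and the name eliminate_dead are assumptions.

```python
def eliminate_dead(code, live_out):
    """Drop assignments whose destination is never used afterwards.

    code: list of (dest, op, arg1, arg2); live_out: names live at block exit.
    """
    live = set(live_out)
    out = []
    for dest, op, a1, a2 in reversed(code):
        if dest not in live:
            continue                 # dead: the value is never consumed
        live.discard(dest)           # this statement satisfies the use of dest
        live.update(a for a in (a1, a2) if a is not None)
        out.append((dest, op, a1, a2))
    return list(reversed(out))

code = [('t', '+', 'x', '5'),        # t is never used afterwards: dead
        ('y', '*', 'x', 'x')]
print(eliminate_dead(code, live_out={'y'}))   # [('y', '*', 'x', 'x')]
```

The slide's if (i == 1) example is the complementary case: there, constant propagation first proves the condition false, and then the whole guarded block becomes dead.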



Peephole optimization
▪ A statement-by-statement code-generation strategy often
produces target code that contains redundant instructions and
suboptimal constructs.
▪ Peephole optimization is a simple and effective technique for
locally improving target code.
▪ This technique is applied to improve the performance of the
target program by examining the short sequence of target
instructions (called the peephole) and replacing these instructions
by shorter or faster sequence whenever possible.
▪ Peephole is a small, moving window on the target program.
▪ Some examples of program transformations that are
characteristic of peephole optimization:
• redundant-instruction (load & store) elimination
• flow-of-control optimizations
• algebraic simplifications
• use of machine idioms
Redundant Loads & Stores
▪ Redundant loads and stores can be eliminated by the following
kinds of transformations.
▪ Example 1:
MOV R0, x
MOV x, R0
▪ We can eliminate the second instruction, since x is already in R0,
so the pair becomes just MOV R0, x.
▪ Example 2 (before):
PUSH Rx
POP Rx
(after): … nothing …
▪ Example 3:
LD R0, a
ST a, R0
▪ We can delete the store instruction because whenever it is executed, the
first instruction will ensure that the value of a has already been loaded
into register R0.
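Example 3 is exactly a two-instruction peephole. The Python sketch below is illustrative (the tuple format ('LD', reg, addr) / ('ST', addr, reg) is an assumption): it slides over adjacent pairs and drops a load/store that merely undoes its predecessor.

```python
def peephole_loads(insns):
    """Drop a LD/ST (or ST/LD) whose operands mirror the previous instruction."""
    out = []
    for ins in insns:
        if out:
            prev = out[-1]
            # LD R,x then ST x,R (or ST x,R then LD R,x): second is redundant.
            if {prev[0], ins[0]} == {'LD', 'ST'} and prev[1:] == (ins[2], ins[1]):
                continue
        out.append(ins)
    return out

insns = [('LD', 'R0', 'a'), ('ST', 'a', 'R0'), ('ADD', 'R0', 'b')]
print(peephole_loads(insns))   # [('LD', 'R0', 'a'), ('ADD', 'R0', 'b')]
```

Because the window scans the already-emitted output, each deletion can expose a new adjacent pair, mirroring how a real peephole pass iterates until nothing more matches.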



Flow of Control Optimization
▪ The intermediate code generation algorithms frequently produce
jumps to jumps, jumps to conditional jumps, or conditional jumps
to jumps.
▪ These unnecessary jumps can be eliminated, in either the intermediate
code or the target code, by the following kinds of peephole optimizations.
▪ We can collapse a jump to a jump:
(before):  goto L1          (after):  goto L2
           ……                         ……
           L1: goto L2                L1: goto L2
▪ If there are now no jumps to L1, it may be possible to eliminate the
statement L1: goto L2, provided it is preceded by an unconditional
jump. Similarly, the sequence
if debug = 1 goto L1
goto L2
L1: print debugging information
L2:
can be replaced by:
if debug != 1 goto L2
L1: print debugging information
L2:
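The jump-to-jump collapse can be sketched in a few lines. This Python sketch is illustrative (the ('label', L) / ('goto', L) instruction format is an assumption, and it follows only one level of indirection per pass; running it to a fixpoint would handle longer chains).

```python
def collapse_jumps(insns):
    """Retarget any goto whose destination label is itself a goto."""
    target = {}
    for i, ins in enumerate(insns):
        # A label immediately followed by 'goto L2' forwards to L2.
        if ins[0] == 'label' and i + 1 < len(insns) and insns[i + 1][0] == 'goto':
            target[ins[1]] = insns[i + 1][1]
    return [('goto', target.get(ins[1], ins[1])) if ins[0] == 'goto' else ins
            for ins in insns]

insns = [('goto', 'L1'), ('label', 'L0'), ('label', 'L1'), ('goto', 'L2')]
print(collapse_jumps(insns)[0])   # ('goto', 'L2')
```

After retargeting, L1 may have no remaining references, at which point the L1: goto L2 stub itself becomes eligible for removal, as the slide notes.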
Flow of Control Optimization …
▪ Remove a jump to the next instruction:
Before:            After:
goto L1            L1: …
L1: …

▪ Replace a jump around a jump:
Before:               After:
if T = 0 goto L1      if T != 0 goto L2
goto L2               L1: …
L1: …


Algebraic simplification
▪ Peephole optimization is an effective technique for algebraic
simplification.
▪ Statements such as x = x + 0 or x = x * 1 can be eliminated by
peephole optimization (replaced by x itself, because x + 0 and x * 1
both equal x).
▪ As another example, the expression a = a + 1 can simply be replaced by
INC a.
▪ Example 3 (before):          (after):
void add (int i)               void add (int i)
{                              {
    a[0] = i + 0;                  a[0] = i;
    a[1] = i * 0;                  a[1] = 0;
    a[2] = i - i;                  a[2] = 0;
    a[3] = 1 + i + 1;              a[3] = 2 + i;
}                              }
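These identities are pure pattern matches on one statement at a time, which makes them ideal peephole rules. This Python sketch is illustrative (the tuple format and the name simplify are assumptions):

```python
def simplify(dest, op, a1, a2):
    """Apply the algebraic identities from the slide to one statement."""
    if op == '+' and a2 == 0:
        return (dest, 'copy', a1, None)    # x + 0 -> x
    if op == '*' and a2 == 1:
        return (dest, 'copy', a1, None)    # x * 1 -> x
    if op == '*' and a2 == 0:
        return (dest, 'copy', 0, None)     # x * 0 -> 0
    if op == '-' and a1 == a2:
        return (dest, 'copy', 0, None)     # x - x -> 0
    return (dest, op, a1, a2)              # no identity applies

print(simplify('a0', '+', 'i', 0))   # ('a0', 'copy', 'i', None)
print(simplify('a2', '-', 'i', 'i')) # ('a2', 'copy', 0, None)
```

For floating point, some of these identities are unsafe (e.g. x + 0 is not an identity for a negative zero or NaN), so real compilers gate them on the type of the operands.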



Machine idioms
▪ The target machine may have hardware instructions to implement
certain specific operations efficiently.
▪ Detecting situations that permit the use of these instructions can
reduce execution time significantly.
▪ Hence we can replace these target instructions by equivalent
machine instructions in order to improve the efficiency.
▪ For example some machines have auto-increment or auto-
decrement addressing modes.
▪ These modes can be used in code for statements like
i = i + 1; or x = x – 1; to make them execute faster.



Loops in Flow Graphs: Dominators
▪ In a flow graph, a node d dominates a node n if every path from the
initial node to n goes through d.
▪ This is denoted 'd dom n'.
▪ The initial node dominates all the remaining nodes in the flow
graph.
▪ Every node dominates itself.

[Figure: flow graph with initial node 1, node 2 below it, and nodes 3, 4 and 5]

▪ Node 1 is the initial node, and it dominates every node.
▪ Node 2 dominates 3, 4 and 5.
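Dominator sets can be computed with the standard iterative data-flow method: dom(n) is {n} plus the intersection of dom(p) over all predecessors p, iterated to a fixpoint. The Python sketch below is illustrative; the edge set 1→2, 2→3, 2→4, 3→5, 4→5 is an assumed reading of the slide's figure, chosen so that node 2 dominates 3, 4 and 5 as the slide states.

```python
def dominators(succ, entry):
    """Iterative dominator computation over a successor map."""
    nodes = set(succ)
    preds = {n: {p for p in succ if n in succ[p]} for n in nodes}
    dom = {n: set(nodes) for n in nodes}   # start from the full set
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:          # every non-entry node here has preds
            new = set.intersection(*(dom[p] for p in preds[n])) | {n}
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

succ = {1: {2}, 2: {3, 4}, 3: {5}, 4: {5}, 5: set()}
dom = dominators(succ, 1)
print(sorted(dom[5]))   # [1, 2, 5]
```

Node 5 is reached via 3 or via 4, so neither 3 nor 4 dominates it; the intersection keeps only the nodes on every path, namely 1 and 2.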
Loops in Flow Graphs: Natural Loops
▪ A natural loop is a set of nodes with a header node that dominates
all the nodes in the set, together with at least one back edge entering
the header.
There are two essential properties of a natural loop:
1. A loop must have a single entry point, called the header. This
point dominates all nodes in the loop.
2. There must be at least one way to iterate the loop,
i.e., at least one path back to the header.

[Figure: flow graph with header node 1 and a back edge 6 → 1]

▪ 6 → 1 forms a natural loop, and 1 dom 6.


Loops in Flow Graphs: Inner Loops
▪ An inner loop is a loop that contains no other loops.

[Figure: flow graph with nodes 1–5 and a back edge 4 → 2]

▪ Here the inner loop is the one formed by the back edge 4 → 2,
i.e., the nodes 2, 3 and 4.
▪ When two loops have the same header, it is hard to
tell which is the inner loop.
▪ Thus, we shall assume that when two natural loops
have the same header, and neither is properly
contained within the other, they are combined and
treated as a single loop.
▪ E.g., the natural loops of the back edges 3 → 1 and 4 → 1 in
Fig. 2 are {1,2,3} and {1,2,4}, respectively.
▪ We shall combine them into a single loop, {1,2,3,4}.
▪ A loop with a different header, by contrast, would not be combined
with these, but rather treated as a separate loop, possibly nested
within them. (Fig. 2: two loops with the same header)
Loops in Flow Graphs: Pre-Headers
▪ Several transformations require us to move statements "before
the header".
▪ Therefore we begin treatment of a loop L by creating a new
block, called the preheader.
▪ The preheader has only the header as successor, and all edges
which formerly entered the header of L from outside L instead
enter the preheader.
▪ Edges from inside loop L to the header are not changed.

[Figure: before — edges from outside enter the header directly; after — outside
edges enter the new preheader, which falls through to the header]
Loops in Flow Graphs: Reducible Flow Graph
▪ A reducible flow graph is one whose edges can be partitioned into
two disjoint groups: forward edges and back edges.
▪ Exclusive use of structured flow-of-control statements such as if-
then-else, while-do, continue, and break produces
programs whose flow graphs are always reducible.
▪ There are no jumps into the middle of loops from outside; the only entry to a
loop is through its header.
▪ These edge groups have the following properties:
▪ The forward edges form an acyclic graph
(a graph without cycles) in which every node can
be reached from the initial node of the graph G.
▪ The back edges consist only of edges whose
head dominates their tail.

[Figure: reducible flow graph with nodes 1–5]


Loops in Flow Graphs: Nonreducible Flow Graph
▪ A nonreducible flow graph is a flow graph in which:
1. there are no back edges (no edge whose head dominates its tail), and
2. the forward edges may produce a cycle in the graph.

[Figure: nonreducible flow graph with a cycle between nodes 2 and 3]


Code Generation
▪ Code generation can be considered the final phase of compilation.
▪ The code generator takes as input an intermediate representation of
the source program and maps it into the target language/machine.
▪ If the target language is machine code, then registers or memory
locations are selected for each of the variables used by the program.
▪ The intermediate instructions are translated into sequences of machine
instructions.

Source Program → Front End → (IR) → Code Optimizer → (IR) → Code Generator → Target Program

▪ Properties of target code:
1. Correctness
2. High quality
3. Efficient use of the resources of the target machine
4. Quick code generation
Issues in code generator design: Input to code generator

▪ Input to the code generator consists of the intermediate


representation of the source program.
▪ The choice for the intermediate representation includes:
✓ Graphical representations such as syntax trees and DAG’s.
✓ Linear representations such as postfix notation, three address representation
(quadruples, triples, indirect triples).
✓ Virtual machine representations such as bytecodes and stack-machine code;

▪ The detection of semantic errors should be done before submitting
the input to the code generator.
▪ The code generation phase requires completely error-free
intermediate code as input.



Issues in code generator design: Target program
▪ The output may be in the form of:
1. Absolute machine language: an absolute machine language
program can be placed in a fixed memory location and immediately
executed (i.e., executable code).
2. Relocatable machine language: subroutines can be
compiled separately; a set of relocatable object modules can
be linked together and loaded for execution (object files for a
linker).
3. Assembly language: producing an assembly language program
as output makes the process of code generation easier; an
assembler is then required to convert the code into binary form
(facilitates debugging).
4. Byte code forms for interpreters (e.g., the JVM).



Issues in code generator design: Memory management

▪ Names in the source program are mapped to addresses of data


objects in run-time memory by both the front end and code
generator. i.e.
▪ Mapping names in the source program to addresses of data objects
in run time memory is done cooperatively by the front end and the
code generator.
▪ Memory management uses the symbol table to get information about
the names that appear in a three-address statement.
▪ From the symbol table information, a relative address can be
determined for a name in a data area.
▪ The amount of memory required by declared identifiers is
calculated, and storage space is reserved in memory at run time.



Issues in code generator design: Instruction selection
▪ Instruction selection is the process of choosing target-language instructions for
each IR statement. It means that,
▪ The code generator must map the IR program into a code sequence that can be
executed by the target machine.
▪ depends on the instruction set of the target machine, Instruction speeds and
machine idioms
Example: the sequence of statements
a := b + c
d := a + e
would be translated into:
LD R0, b       // R0 = b
ADD R0, R0, c  // R0 = R0 + c
ST a, R0       // a = R0
LD R0, a       // R0 = a
ADD R0, R0, e  // R0 = R0 + e
ST d, R0       // d = R0

▪ Here the fourth statement is redundant, since it loads a value that has just
been stored, so we can eliminate it.



Issues in code generator design: Register allocation
▪ decide what values to keep in which registers
▪ The use of registers is often subdivided into two sub problems:
• Register allocation and register assignment
▪ During register allocation, we select the set of variables that will reside
in registers at a point in the program.
▪ During a subsequent register assignment phase, we pick the specific
register that a variable will reside in.
▪ Finding an optimal assignment of registers to variables is difficult, even
with a single register.
▪ Mathematically, the problem is NP-complete.
▪ NP-complete problems are problems whose status is unknown:
▪ no polynomial-time algorithm has yet been discovered for any NP-complete
problem, nor has anybody been able to prove that no polynomial-time
algorithm exists for any of them.



Issues in code generator design: Choice of evaluation order

▪ The order in which computations are performed can


affect the efficiency of the target code.
▪ Some computation orders require fewer registers to hold
intermediate results than others.
▪ However, picking a best order is another difficult, NP-
complete problem.
▪ We shall avoid the problem by generating code for the
three-address statements in the order in which they
have been produced by the intermediate code generator.



Issues in code generator design: Choice of evaluation order
• Example: when instructions are independent, their evaluation order can
be changed.

a + b – (c + d) * e

Original order:          Code (10 instructions):
t1 := a + b              LD R0, a
t2 := c + d              ADD R0, b
t3 := e * t2             ST t1, R0
t4 := t1 – t3            LD R1, c
                         ADD R1, d
                         LD R0, e
                         MUL R0, R1
                         LD R1, t1
                         SUB R1, R0
                         ST t4, R1

Reordered:               Code (8 instructions):
t2 := c + d              LD R0, c
t3 := e * t2             ADD R0, d
t1 := a + b              LD R1, e
t4 := t1 – t3            MUL R1, R0
                         LD R0, a
                         ADD R0, b
                         SUB R0, R1
                         ST t4, R0



A Simple Target Machine Model

• Implementing code generation requires complete understanding


of the target machine architecture and its instruction set.

• Our (hypothetical) machine:


– Byte-addressable (word = 4 bytes)
– Has n general-purpose registers R0, R1, …, Rn-1
– All operands are integers
– Three-address instructions of the form op dest, src1, src2
• Assume the following kinds of instructions are available:
– Load operations
– Store operations
– Computation operations
– Unconditional jumps
– Conditional jumps



The Target Machine: Addressing Modes

• We assume that our target machine has a variety of


addressing modes:
– In instructions, a location can be a variable name x referring
to the memory location that is reserved for x.
– Indexed address, a(r), where a is a variable and r is a register.

LD R1, a(R2)    R1 = contents(a + contents(R2))


• This addressing mode is useful for accessing arrays.

– A memory location can be an integer indexed by a register, for


example,
LD R1, 100(R2) R1 = contents(100 + contents(R2))
• useful for following pointers



The Target Machine: Addressing Modes
– Two indirect addressing modes: *r and *100(r)
LD R1, *100(R2)    R1 = contents(contents(100 + contents(R2)))
• Loading into R1 the value in the memory location stored in the
memory location obtained by adding 100 to the contents of register
R2.

• Immediate constant addressing mode.


The constant is prefixed by # symbol.
The instruction LD R1, #100 loads the integer 100 into register
R1, and ADD R1, R1, #100 adds the integer 100 to register R1,
i.e., R1 = R1 + 100.
• Comments at the end of instructions are preceded by //.



The Target Machine: Addressing Modes

• Op-codes (op), for example


LD and ST (move content of source to destination)
ADD (add content of source to destination)
SUB (subtract content of source from dest.)

Address modes
Mode Form Address Added Cost
Absolute M M 1
Register R R 0
Indexed a(R) a + contents (R) 1
Indirect Register *R contents (R) 0
Indirect Indexed *a(R) contents(a + contents (R)) 1
Literal #c c 1



A Simple Target Language (assembly language)
• Example:
x = y – z  →  LD R1, y        // R1 = y
              LD R2, z        // R2 = z
              SUB R1, R1, R2  // R1 = R1 - R2
              ST x, R1        // x = R1

Suppose a is an array whose elements are 8-byte values, perhaps real numbers,
and the elements of a are indexed starting at 0.

b = a[i]  →  LD R1, i         // R1 = i
             MUL R1, R1, 8    // R1 = R1 * 8
             LD R2, a(R1)     // R2 = contents(a + contents(R1))
             ST b, R2         // b = R2

That is, the second step computes 8i, and the third step places in register R2
the value of the i-th element of a.

Similarly, the assignment into the array a represented by the three-address
instruction a[j] = c is implemented by:

a[j] = c  →  LD R1, c         // R1 = c
             LD R2, j         // R2 = j
             MUL R2, R2, 8    // R2 = R2 * 8
             ST a(R2), R1     // contents(a + contents(R2)) = R1


A Simple Target Language (assembly language)
To implement a simple pointer indirection, such as the three-address
statement x = *p, we can use machine instructions like:

x = *p → LD R1, p //R1=p
LD R2, 0(R1) //R2=content(0+content(R1))
ST x, R2 //x=R2
The assignment through a pointer *p = y is similarly implemented in machine
code by:
*p = y → LD R1, p //R1=p
LD R2, y //R2=y
ST 0(R1), R2 //content(0+content(R1))=R2
Finally, consider a conditional-jump three-address instruction like
if x < y goto L, implemented as:
if x < y goto L → LD R1, x //R1=x
LD R2, y //R2=y
SUB R1, R1, R2 //R1=R1-R2
BLTZ R1, L //if R1 < 0 jump to L



DAG Representation of Basic Blocks
▪ Directed acyclic graphs (DAGs) are useful data structures for implementing
transformations on basic blocks.
▪ A DAG for a basic block (or just DAG) is a directed acyclic graph
with the following labels on nodes:
1. Leaves are labeled by unique identifiers: either variable names or
constants.
• The leaves represent initial values of names, and we subscript
them with 0 to avoid confusion with labels denoting the "current"
values of names.
2. Interior nodes are labeled by an operator symbol
3. Nodes are also optionally given a sequence of identifiers for
labels.
• interior nodes represent computed values, and the identifiers
labeling a node are deemed to have that value



DAG Representation of Basic Blocks
▪ Algorithm: DAG construction
▪ We assume each three-address statement is of one of the following types:
Case (i)   x := y op z
Case (ii)  x := op y
Case (iii) x := y
The DAG is constructed with the following steps.
▪ Step 1: If node(y) is undefined, create a leaf node(y). Similarly, if
node(z) is undefined, create a leaf node(z).
▪ Step 2:
Case (i): find or create a node n labeled op whose left child is node(y)
and whose right child is node(z); this check is what detects common
subexpressions.
Case (ii): find or create a node n labeled op whose single child is node(y).
Case (iii): let n be node(y).
▪ Step 3: Delete x from the list of attached identifiers for the old node(x).
Append x to the list of attached identifiers for the node n found in step 2,
and set node(x) to n.
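The algorithm for case (i) can be sketched directly: keying the node table by (op, left, right) makes the "find or create" step a dictionary lookup, so a repeated subexpression automatically maps to the existing node. This Python sketch is illustrative; the tuple representation and names are assumptions.

```python
def build_dag(stmts):
    """Build a DAG from statements (x, y, op, z) meaning x := y op z."""
    node_of = {}   # current node for each name
    nodes = {}     # (op, left, right) -> node id (the key itself)
    labels = {}    # node id -> identifiers attached to it

    def leaf(name):
        # Step 1: create a leaf for an undefined name.
        if name not in node_of:
            node_of[name] = ('leaf', name)
        return node_of[name]

    for x, y, op, z in stmts:
        # Step 2, case (i): find or create the interior node.
        key = (op, leaf(y), leaf(z))
        n = nodes.setdefault(key, key)
        labels.setdefault(n, [])
        # Step 3: detach x from its old node, attach it to n.
        if x in node_of:
            old = node_of[x]
            if old in labels and x in labels[old]:
                labels[old].remove(x)
        labels[n].append(x)
        node_of[x] = n
    return nodes, labels

stmts = [('t1', '4', '*', 'i'),
         ('t3', '4', '*', 'i')]    # same (op, left, right): node is shared
nodes, labels = build_dag(stmts)
print(len(nodes), labels[('*', ('leaf', '4'), ('leaf', 'i'))])  # 1 ['t1', 't3']
```

Both t1 and t3 end up attached to one '*' node, which is exactly how the DAG exposes the common subexpression 4*i in the example that follows.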



DAG Representation of Basic Blocks
▪ Example: a DAG for the block
a = b + c
b = a - d
c = b + c
d = a - d

[Figure: the DAG for this block]

When we construct the node for the third statement, c = b + c, we know that
the use of b in b + c refers to the node labeled '-', because that is the most
recent definition of b. Thus, we do not confuse the values computed at
statements one and three.
However, the node corresponding to the fourth statement, d = a - d, has the
operator '-' and the nodes with attached variables a and d0 as children. Since
the operator and the children are the same as those for the node corresponding
to statement two, we do not create a new node, but add d to the list of
identifiers for the node labeled '-'.
▪ By eliminating the common subexpression in this way, we obtain the
optimized block.


DAG Representation of Basic Blocks

Example 2:
(1) t1 := 4*i
(2) t2 := a[t1]
(3) t3 := 4*i
(4) t4 := b[t3]
(5) t5 := t2*t4
(6) t6 := prod + t5
(7) prod := t6
(8) t7 := i + 1
(9) i := t7
(10) if i <= 20 goto (1)

[Figure: the DAG — one '*' node labeled t1, t3 over leaves 4 and i, shared by
both '[]' nodes (t2 over a, t4 over b); a '*' node t5; a '+' node t6, prod; a
'+' node t7, i over i and 1; and a '≤' node comparing i with 20, linked to (1)]


Applications of DAGs

▪ DAGs are used for the following:
1. Determining the common subexpressions.
2. Determining which names are used inside the block but
computed outside the block.
3. Determining which statements of the block could have their
computed values used outside the block.
4. Simplifying the list of quadruples by eliminating the common
subexpressions and not performing assignments of the
form x := y unless they are necessary.


