
21CSC304J

COMPILER DESIGN
UNIT 5 – Code Optimization
Prepared by

Dr L Josephine Usha
AP/CSE
SRMIST
Unit V - Outline

• Code Optimization - Principal Sources of Optimization - Function-Preserving Transformations - Loop Optimization - Peephole Optimization - DAG - Basic Blocks - Flow Graphs - Global Data Flow Analysis - Efficient Data Flow Algorithms - Runtime Environments - Source Language Issues - Storage Organization - Activation Records - Storage Allocation Strategies



Contents
• Principal Sources of Optimization
• DAG
• Optimization of Basic Blocks
• Global Data Flow Analysis
• Efficient Dataflow algorithms
• Issues in a Design of a Code Generator
• A Simple Code Generator Algorithm

Principles of Code Optimization

• Code Optimization
• Basic Blocks and Flow Graphs
• Sources of Optimization
  1. Common Subexpression Elimination
  2. Copy Propagation
  3. Dead-Code Elimination
  4. Constant Folding
  5. Loop Optimization



Organization for an Optimizing Compiler



Code Optimization: Introduction
• Intermediate code undergoes various transformations, called optimizations, to make the resulting code run faster and take less space.
• We consider only machine-independent optimizations, i.e., those that do not rely on any properties of the target machine.
• The techniques used are a combination of control-flow and data-flow analysis.
  • Control-Flow Analysis identifies loops in the flow graph of a program, since such loops are usually good candidates for improvement.
  • Data-Flow Analysis collects information about the way variables are used in a program.



Criteria for Code-Improving Transformations
• The best transformations are those that yield the most benefit for the least effort.
• A transformation must preserve the meaning of a program.
• A transformation must, on average, speed up a program by a measurable amount.
• Dramatic improvements are usually obtained by improving the source code: the programmer is responsible for choosing the best possible data structures and algorithms for the problem.



Basic Blocks and Flow Graphs

• The machine-independent code-optimization phase consists of control-flow and data-flow analysis followed by the application of transformations.
• During control-flow analysis, a program is represented as a flow graph, where:
  • Nodes represent basic blocks: sequences of consecutive statements in which flow of control enters at the beginning and leaves at the end without halting or branching except at the end;
  • Edges represent the flow of control.



Partitioning Three-Address Instructions into
Basic Blocks
• Identify all the leaders in the program.
• For each leader: its basic block consists of the leader and all instructions up to, but not including, the next leader or the end of the routine.
• Leaders
  • The first three-address instruction in the intermediate code is a leader.
  • Any instruction that is the target of a conditional or unconditional jump is a leader.
  • Any instruction that immediately follows a conditional or unconditional jump is a leader.
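A minimal sketch of this partitioning, assuming each three-address instruction is represented as a small dictionary with optional 'is_jump' and 'target' fields (that representation, and the function name, are illustrative assumptions, not part of the slides):

    def partition_into_basic_blocks(instrs):
        """Split a list of three-address instructions into basic blocks."""
        leaders = set()
        if instrs:
            leaders.add(0)                        # rule 1: the first instruction is a leader
        for i, ins in enumerate(instrs):
            if ins.get('is_jump'):
                target = ins.get('target')
                if target is not None:
                    leaders.add(target)           # rule 2: jump targets are leaders
                if i + 1 < len(instrs):
                    leaders.add(i + 1)            # rule 3: instruction after a jump is a leader
        starts = sorted(leaders)
        blocks = []
        for k, start in enumerate(starts):
            end = starts[k + 1] if k + 1 < len(starts) else len(instrs)
            blocks.append(instrs[start:end])      # leader up to, not including, the next leader
        return blocks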



Flow Graph: An Example

• A flow graph is a graphical depiction of a sequence of instructions with control-flow edges.

• The figure shows the flow graph for the three-address code fragment of quicksort; each Bi is a basic block.



Quick sort: An Example Program
void quicksort(m,n)
{
int m,n;
int i,j,v,x;
if (n <= m) return;
i = m-1; j = n; v = a[n];
/* fragment begins here */
while (1) {
do i = i+1; while (a[i]<v);
do j = j-1; while (a[j]>v);
if (i>=j) break;
x = a[i]; a[i] = a[j]; a[j] = x;
}
x = a[i]; a[i] = a[n]; a[n] = x;
/* fragment ends here */
quicksort(m,j); quicksort(i+1,n);
}
The Principal Sources of Optimization

• Local transformations involve only statements within a single basic block.
• Global transformations involve statements across several basic blocks.
• A basic block computes a set of expressions: a number of transformations can be applied to a basic block without changing the set of expressions it computes.
  1. Common Subexpression Elimination
  2. Copy Propagation
  3. Dead-Code Elimination
  4. Constant Folding



Common Subexpression
• An expression E is a common subexpression if E was previously computed and the values of the variables in E have not changed since the earlier computation.
• Local common subexpression elimination works within a single basic block.



Global
• Global common subexpression elimination considers expressions that are recomputed across basic blocks, as with 4*i and 4*j in the quicksort flow graph.


• After: the flow graph following global common subexpression elimination (figure).



DAGs for Determining Common
Subexpressions
• To identify common subexpressions we represent a basic block as a DAG showing how expressions are reused within the block.
• A DAG for a basic block has the following nodes and labels:
  1. Leaves are labeled with unique identifiers, either variable names or constants.
  2. Interior nodes are labeled with an operator symbol.
  3. Nodes can optionally be associated with a list of variables, namely those variables that hold the value computed at the node.
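A minimal sketch of DAG construction for one basic block, assuming each instruction is a tuple (x, op, y, z); the node encoding and helper names are illustrative assumptions:

    def build_dag(block):
        """Build a DAG for a basic block of instructions (x, op, y, z)."""
        nodes = []        # each node is ('leaf', name) or (op, left_index, right_index)
        node_for = {}     # variable name -> index of the node currently holding its value
        cache = {}        # (op, left_index, right_index) -> node index, for reuse

        def node_of(name):
            # return the node holding name's current value, creating a leaf if needed
            if name not in node_for:
                nodes.append(('leaf', name))
                node_for[name] = len(nodes) - 1
            return node_for[name]

        for x, op, y, z in block:
            key = (op, node_of(y), node_of(z))
            if key not in cache:                  # new expression: create an interior node
                nodes.append(key)
                cache[key] = len(nodes) - 1
            node_for[x] = cache[key]              # x now labels the (possibly shared) node
        return nodes, node_for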



DAGs for Blocks: An Example

• The following shows both the three-address code of a basic block and its associated DAG (figure).



Copy Propagation

• Copy Propagation Rule: given the copy statement x := y, use y in place of x wherever possible after the copy statement.
• Copy propagation applied to block B5 yields the before/after versions shown in the figure.



Copy Statements
• Assignments of the form x := y are called copy statements, or simply copies.



Dead-Code Elimination

• A variable is live at a point in a program if its value can be used subsequently; otherwise it is dead.
• A piece of code is dead if the data it computes is never used elsewhere.
• Dead code may appear as the result of previous transformations.
• Dead-code elimination works well together with copy propagation.
• Example: considering block B5 after copy propagation, x is never used again anywhere in the code. Thus x is a dead variable and we can eliminate the assignment x := t3 from B5.



Dead-Code Elimination (figure)



Constant Folding
• Constant folding is based on deducing at compile time that the value of an expression (and in particular of a variable) is a constant.
• The transformation replaces such an expression by its constant value.
• Constant folding is useful for discovering dead code.
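A minimal sketch of constant folding over quadruples (op, arg1, arg2, result), assuming numeric literals are stored as Python numbers; this representation is an assumption for illustration:

    import operator

    FOLDABLE = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

    def fold_constants(quads):
        """Replace an operation by a copy of its value when both operands are literals."""
        folded = []
        for op, a1, a2, res in quads:
            literals = isinstance(a1, (int, float)) and isinstance(a2, (int, float))
            if op in FOLDABLE and literals and not (op == '/' and a2 == 0):
                folded.append(('=', FOLDABLE[op](a1, a2), None, res))  # e.g. 2 * 3.14 -> 6.28
            else:
                folded.append((op, a1, a2, res))
        return folded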



Loop Optimization
• The running time of a program can be improved if we decrease the number of instructions executed in an inner loop.
• Three techniques are useful:
  • Code Motion
  • Reduction in Strength
  • Induction-Variable Elimination



Code Motion

• If the computation of an expression is loop-invariant, this transformation places the computation before the loop.
• Loop-invariant computation: an expression that yields the same result independent of the number of times the loop is executed.
• Example: consider the following while statement:
    while (i <= limit - 2) do ...
• The expression limit - 2 is loop-invariant. The code-motion transformation yields:
    t := limit - 2;
    while (i <= t) do ...
Reduction in Strength
• The transformation of replacing an expensive operation, such as multiplication, by a cheaper one, such as addition or subtraction.
• Result: substituting an addition or subtraction for a multiplication inside a loop speeds up the resulting code, as in the example below.
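A typical before/after, in the same three-address style used for the quicksort fragment (the block and temporary names echo the induction-variable discussion that follows and are illustrative):

    Before (inner loop):          After strength reduction:
        j  := j - 1                   j  := j - 1
        t4 := 4 * j                   t4 := t4 - 4
        t5 := a[t4]                   t5 := a[t4]

Here t4 must be initialized to 4*j just before the loop is entered.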



Induction Variables
• A variable x is an induction variable of a loop if every time x changes value, it is incremented or decremented by some constant.
• Example
  • Consider the loop of block B3. The variables j and t4 are induction variables; the same applies to variables i and t2 in block B2.
  • After reduction in strength is applied to both t2 and t4, the only remaining use of i and j is to determine the test in B4.
  • Since t2 := 4*i and t4 := 4*j, the test i > j is equivalent to the test t2 > t4.
  • After this replacement in the test, both i (in block B2) and j (in block B3) become dead variables and can be eliminated.



Flow Graph after Reduction in Strength and Induction-Variable Elimination (figure)



CONTENTS
• Principal Sources of Optimization
• DAG
• Optimization of Basic Blocks
• Global Data Flow Analysis
• Efficient Dataflow algorithms
• Issues in a Design of a Code Generator
• A Simple Code Generator Algorithm



Optimization of Basic Blocks
• A number of code-improving transformations on basic blocks are called structure-preserving transformations. They are:
  • Common subexpression elimination
  • Dead-code elimination
  • Algebraic transformations, such as reduction in strength
• These transformations are implemented by constructing a DAG for the block.



DAG-Example
• Leaves correspond to atomic operands, and interior nodes correspond to operators.
• Example: (a + a*(b - c)) + ((b - c)*d)



Example
• A DAG for the block (figure).
• If either b or d is not live on exit from the block, we do not need to compute that variable.



Example – Contd.
• If b is not live on exit, we could use the simpler code shown in the figure.



Dead-Code Elimination

• Delete any root of the DAG (a node with no parents) that is not live on exit (has no live-out variable associated with it).
• Repeat the previous step until there is no change.

• Example: assume a and b are live on exit.
• We first remove the node for e and then the node for c.
• The nodes for a and b remain.
CSE - Algebraic Identities
• x + 0 = 0 + x = x
• x * 1 = 1 * x = x
• a && true = true && a = a
• a || false = false || a = a
• x * 0 = 0 * x = 0
• 0 / x = 0
• x - 0 = x



CSE- Reduction in Strength
• Replacing a more expensive operator by a cheaper one.
• x ** 2 = x * x
• 2.0 * x = x + x
• x / 2 = x * 0.5



CSE - Constant Folding
• Evaluate constant expressions at compile time and replace them by their values.
• Example: the expression 2 * 3.14 is replaced by 6.28.



CSE- Others
• Commutativity and Associativity
  • * is commutative, i.e., x*y = y*x.
• Relational operators
  • <=, >=, <, >, = and ≠ can generate common subexpressions, since x > y can be evaluated by testing whether x - y is positive.
• Associative laws can expose common subexpressions.
• Example: the statements
    a = b + c
    e = c + d + b
  generate the three-address code
    a := b + c
    t := c + d
    e := t + b
• If t is not needed outside the block, we can change this to
    a := b + c
    e := a + d
  using the associativity and commutativity of +.



CONTENTS
• Principal Sources of Optimization
• DAG
• Optimization of Basic Blocks
• Global Data Flow Analysis
• Efficient Dataflow algorithms
• Issues in a Design of a Code Generator
• A Simple Code Generator Algorithm



Global Data Flow Analysis

• Data-flow analysis refers to techniques that derive information about the flow of data along program execution paths.
• Examples
  • One way to implement global common subexpression elimination requires us to determine whether two identical expressions evaluate to the same value along any possible execution path of the program.
  • If the result of an assignment is not used along any subsequent execution path, then we can eliminate the assignment as dead code.



Data Flow
• Program points and execution paths
  • Within one basic block, there is a program point between every two adjacent statements, as well as a point before the first statement and after the last.
  • If there is an edge from block B1 to block B2, then the program point after the last statement of B1 may be followed immediately by the program point before the first statement of B2.
• An execution path from point p1 to point pn is a sequence of points p1, p2, ..., pn such that for each i = 1, 2, ..., n-1, either
  1. pi is the point immediately preceding a statement and pi+1 is the point immediately following that same statement, or
  2. pi is the end of some block and pi+1 is the beginning of a successor block.



Reaching Definitions
• The reaching definitions of a program point are the definitions that may reach that point along some path.
• The analysis determines which definitions of a variable may reach each use of the variable; for each use, we list the definitions that reach it.
• In global data-flow analysis we collect such information at the endpoints of each basic block, and can do additional local analysis within the block.
• A definition d reaches a point p if there is a path from d to p along which d is not killed.
• A definition d of a variable x is killed when there is a redefinition of x along the path.



Data-Flow Analysis Schema
• A data-flow value for a program point represents an abstraction of the set of all possible program states that can be observed at that point.
• For reaching definitions, a data-flow value is a set of definitions.
• IN[s] and OUT[s] denote the data-flow values immediately before and after each statement s.
• The data-flow problem is to find a solution to a set of constraints on IN[s] and OUT[s] for all statements s.



Data-Flow Analysis Schema

• Two kinds of constraints:
  • Those based on the semantics of statements (transfer functions)
  • Those based on the flow of control
• A data-flow analysis schema consists of:
  • A control-flow graph
  • A direction of data flow (forward or backward)
  • A set of data-flow values
  • A confluence operator (normally set union or intersection)
  • A transfer function for each block
• We always compute safe estimates of data-flow values: a decision or estimate is safe, or conservative, if it never leads to a change in what the program computes.
Data-Flow Analysis Schema

• A transfer function relates the data-flow values before and after a statement; for example, after the statement b = a, both a and b hold the same value.
• The transfer function of a statement s is denoted fs.
• Transfer functions come in two flavors:
  • Information propagates forward along execution paths: OUT[s] = fs(IN[s]).
  • Information flows backwards up the execution paths: IN[s] = fs(OUT[s]).



The Reaching Definitions Problem
• A definition d reaches a point p if there is a path from the point immediately following d to p such that d is not killed along that path.
• We kill a definition of a variable a if, between the two points along the path, there is an assignment to a.
• Unambiguous and ambiguous definitions of a variable:
    a := b + c     (unambiguous definition of a)
    ...
    *p := d        (ambiguous definition of a, if p may point to variables other
                    than a as well; hence it does not kill the definition of a above)
    ...
    a := k - m     (unambiguous definition of a; kills the definition of a above)
Iterative Algorithm for Reaching Definitions
Algorithm: Reaching definitions.
INPUT: A flow graph for which kill[B] and gen[B] have been computed for each block B.
OUTPUT: IN[B] and OUT[B], the sets of definitions reaching the beginning and end of each block B.
METHOD: iterate the data-flow equations OUT[B] = gen[B] ∪ (IN[B] - kill[B]) and IN[B] = ∪ of OUT[P] over all predecessors P of B, starting from empty sets, until nothing changes (see the sketch below).
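A minimal sketch of the iteration, assuming the flow graph is given as a list of block names, a predecessor map, and per-block gen/kill sets (this data layout is an assumption for illustration):

    def reaching_definitions(blocks, preds, gen, kill):
        """Iteratively solve IN/OUT for reaching definitions (forward problem, union)."""
        IN = {b: set() for b in blocks}
        OUT = {b: set() for b in blocks}
        changed = True
        while changed:
            changed = False
            for b in blocks:
                IN[b] = set().union(*[OUT[p] for p in preds[b]])   # union over predecessors
                new_out = gen[b] | (IN[b] - kill[b])               # transfer function of B
                if new_out != OUT[b]:
                    OUT[b] = new_out
                    changed = True
        return IN, OUT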



Available expressions
• gen[B] = { expressions evaluated in B without subsequently redefining their operands }
• kill[B] = { expressions whose operands are redefined in B without re-evaluating the expression afterwards }
• out[B] = gen[B] ∪ (in[B] - kill[B])



Example (figure)



Live-Variable Analysis
• In live-variable analysis we wish to know, for a variable x and a point p, whether the value of x at p could be used along some path in the flow graph starting at p. If so, we say x is live at p; otherwise, x is dead at p.

• Definitions:
  1. defB: the set of variables defined in B prior to any use of that variable in B.
  2. useB: the set of variables whose values may be used in B prior to any definition of the variable.



Live-Variable Analysis
• Algorithm: Live-variable analysis.
• INPUT: A flow graph with defB and useB computed for each block B.
• OUTPUT: IN[B] and OUT[B], the sets of variables live at the beginning and end of each block B.
• METHOD: iterate the backward data-flow equations IN[B] = useB ∪ (OUT[B] - defB) and OUT[B] = ∪ of IN[S] over all successors S of B, starting from empty sets, until nothing changes (a sketch follows).
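A minimal sketch, symmetric to the reaching-definitions iteration but running backwards over successors (again, the data layout is assumed):

    def live_variables(blocks, succs, use, defs):
        """Iteratively solve IN/OUT for live variables (backward problem, union)."""
        IN = {b: set() for b in blocks}
        OUT = {b: set() for b in blocks}
        changed = True
        while changed:
            changed = False
            for b in blocks:
                OUT[b] = set().union(*[IN[s] for s in succs[b]])   # union over successors
                new_in = use[b] | (OUT[b] - defs[b])               # backward transfer function
                if new_in != IN[b]:
                    IN[b] = new_in
                    changed = True
        return IN, OUT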



Available Expressions
• Let
    IN[B] be the set of expressions available at the point before B,
    OUT[B] be the same for the point following the end of B,
    e_genB be the expressions generated by B,
    e_killB be the set of expressions killed in B.
• Then, for all basic blocks B other than ENTRY:
    OUT[B] = e_genB ∪ (IN[B] - e_killB)
    IN[B] = ∩ of OUT[P] over all predecessors P of B
  with OUT[ENTRY] = ∅; available expressions is a forward problem whose confluence operator is intersection.


CONTENTS
• Principal Sources of Optimization
• DAG
• Optimization of Basic Blocks
• Global Data Flow Analysis
• Efficient Dataflow algorithms
• Issues in a Design of a Code Generator
• A Simple Code Generator Algorithm



CODE GENERATION
• Code generation is the final phase of the compiler.
• A code generator produces target code for a sequence of three-address statements.



Issues in the Design of Code Generator
• Input to the Code Generator
• Target Machines
• Memory Management
• Instruction Selection
• Register Allocation
• Evaluation Order



Input to the code generator

• Input
  • The intermediate representation of the source program
  • The symbol table, used to determine run-time addresses
• Forms of intermediate representation
  • Linear representations such as postfix notation
  • Three-address representations such as quadruples
  • Virtual machine representations such as stack machine code
  • Graphical representations such as syntax trees and DAGs
• Prior to code generation
  • Type checking has been performed, and values can be assigned to names.
  • The input is assumed to be free of errors.
Target Programs

• Output forms
  • Absolute machine language
    • The output is placed at a fixed location in memory (e.g., PL/C).
    • Fast for small programs.
  • Relocatable machine language
    • Allows subprograms to be compiled separately.
    • Needs a linker and loader.
  • Assembly code
    • Uses symbolic instructions and macros to generate code.
    • Needs an assembler, linker and loader.



Memory Management

• Mapping names in the source program to addresses of data objects in run-time memory is done cooperatively by the front end and the code generator.
• A name in a three-address statement refers to a symbol-table entry for the name.
• From the symbol-table information, a relative address can be determined for the name in a data area for the procedure.
• Backpatching is the technique by which labels in the three-address code are converted into addresses of instructions.



Instruction Selection

• Factors to consider for instruction selection:
  • The instruction set should be uniform and complete.
  • Instruction speeds and machine idioms affect the efficiency of the target program.
  • Instructions with register operands are faster than those with memory operands.
• Example: the naive code for a = a + 1
    MOV a, R0
    ADD #1, R0
    MOV R0, a
  can be replaced by the single instruction INC a.
• The quality of the generated code is determined by its speed and size.



Register Allocation
• Instructions involving register operands are shorter and faster than instructions involving memory operands.
• Efficient utilization of registers is therefore important in code generation.



Choice of Evaluation Order
• The order in which computations are performed affects the efficiency of the target code.
• Some computation orders require fewer registers to hold intermediate results than others.
• Picking the best order is an NP-complete problem.
• We therefore generate code for the three-address statements in the order in which they were produced by the intermediate code generator.



CONTENTS
• Principal Sources of Optimization
• DAG
• Optimization of Basic Blocks
• Global Data Flow Analysis
• Efficient Dataflow algorithms
• Issues in a Design of a Code Generator
• A Simple Code Generator Algorithm



Simple Code Generator

• The code generator generates target code for a sequence of three-address statements, using registers to hold the operands of each statement.
• Two descriptors are used:
  • Register descriptor
    • Keeps track of what is currently in each register.
    • Initially, all registers are empty.
  • Address descriptor
    • Keeps track of the locations where the current value of a name can be found at run time.
    • The location might be a register, a stack location, a memory address, or a set of these.



Code Generation Algorithm
• Input: a sequence of three-address statements constituting a basic block.
• For each statement X = Y op Z:
  • Invoke a function getreg to determine the location L where the result X must be stored. Usually L is a register.
  • Consult the address descriptor of Y to determine a location Y' (preferring a register). If the value of Y is not already in L, generate
        MOV Y', L
  • Generate
        op Z', L
    again preferring a register for Z'. Update the address descriptor of X to indicate that X is in L. If L is a register, update its descriptor to indicate that it contains X, and remove X from all other register descriptors.
  • If the current values of Y and/or Z have no next use, are dead on exit from the block, and are in registers, update the register descriptors to indicate that those registers no longer contain Y and/or Z.
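A minimal sketch of this loop with a deliberately simplified getreg: the descriptor layout, the two-register machine, and the instruction strings are assumptions for illustration, not the full algorithm:

    def gen_block(block, registers=('R0', 'R1')):
        """Generate naive target code for statements (x, op, y, z) in one basic block."""
        reg_desc = {r: None for r in registers}    # register -> name it currently holds
        addr_desc = {}                             # name -> register currently holding it
        code = []

        def getreg(y):
            # reuse the register already holding y, else any free one, else R0 (no spilling)
            for r, held in reg_desc.items():
                if held == y:
                    return r
            for r, held in reg_desc.items():
                if held is None:
                    return r
            return registers[0]

        for x, op, y, z in block:
            L = getreg(y)
            y_loc = addr_desc.get(y, y)            # prefer wherever y currently lives
            if y_loc != L:
                code.append(f'MOV {y_loc}, {L}')
            z_loc = addr_desc.get(z, z)
            code.append(f'{op} {z_loc}, {L}')      # L := L op z
            reg_desc[L] = x                        # the register now holds x ...
            addr_desc[x] = L                       # ... and x's value can be found in L
        return code

Run on the four statements of the example slide below, this reproduces the MOV/SUB/ADD sequence shown there (minus the final store of d to memory).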



Code Generation Algorithm

• For a copy instruction x = y, the generator proceeds as follows:
  • It calls getReg(x = y) to select a register Ry for both x and y; we assume getReg always chooses the same register for both.
  • If y is not already in Ry, it issues the load instruction LD Ry, My, where My is one of the memory locations for y in y's address descriptor.
  • If y is already in Ry, no instruction is issued.
  • The register and address descriptors are updated appropriately as each machine instruction is issued.



Function getreg

1. If Y is in a register that holds no other value, and Y is not live and has no next use after X = Y op Z, then return Y's register as L.
2. Failing (1), return an empty register if one is available.
3. Failing (2), if X has a next use in the block, or op is an operator that requires a register, find an occupied register R, store its contents into memory M (by MOV R, M), and use R.
4. Otherwise, select the memory location of X as L.



Example
Stmt          Code generated   Register descriptor     Address descriptor
t1 = a - b    MOV a, R0        R0 contains t1          t1 in R0
              SUB b, R0
t2 = a - c    MOV a, R1        R0 contains t1          t1 in R0
              SUB c, R1        R1 contains t2          t2 in R1
t3 = t1 + t2  ADD R1, R0       R0 contains t3          t3 in R0
                               R1 contains t2          t2 in R1
d = t3 + t2   ADD R1, R0       R0 contains d           d in R0
              MOV R0, d                                d in R0 and memory
Conditional Statements

• Branch if the value of a designated register R meets one of six conditions: negative, zero, positive, non-negative, non-zero, non-positive.

  if X < Y goto Z        MOV X, R0
                         SUB Y, R0     // subtract Y from X
                         CJ< Z         // jump to Z if the condition code is negative

• Condition codes indicate whether the last quantity computed or loaded into a register is negative, zero, or positive.



Conditional Statements
• A compare instruction sets the condition codes without actually storing the computed value; CMP X, Y sets the condition code to positive if X > Y, and so on.

  if X < Y goto Z        CMP X, Y
                         CJL Z         // jump to Z if less

• The code generator maintains a condition-code descriptor, which records the name that last set the condition codes.

  X = Y + Z              MOV Y, R0
  if X < 0 goto L        ADD Z, R0
                         MOV R0, X
                         CJN L         // jump to L if negative



Introduction
• The front end translates a source program into an intermediate
representation from which the back end generates target code.



Intermediate code
• The following are commonly used intermediate code representations:
  • Syntax tree
  • Postfix notation
  • Three-address code



Syntax tree
• A syntax tree depicts the natural hierarchical structure of a source program.
• A DAG (directed acyclic graph) gives the same information in a more compact way because common subexpressions are identified. A syntax tree and a DAG for the assignment statement a := b * -c + b * -c are shown in the figure.



Syntax tree



Postfix notation
• Postfix notation is a linearized representation of a syntax tree; it is a list of the nodes of the tree in which a node appears immediately after its children.
• The postfix notation for the syntax tree above is
    a b c uminus * b c uminus * + assign



Three-Address Code
• A statement involving no more than three references (two for operands and one for the result) is known as a three-address statement.
• A sequence of three-address statements is known as three-address code.
• A three-address statement has the form x = y op z, where x, y and z each have an address (memory location).
• Sometimes a statement contains fewer than three references, but it is still called a three-address statement.



Three-Address Code
• For example: a = b + c * d;
• The intermediate code generator divides this expression into sub-expressions and then generates the corresponding code:
    r1 = c * d;
    r2 = b + r1;
    a = r2;



Three-Address Code
• A three-address instruction uses at most three address locations to compute an expression. Three-address code can be represented in three forms:
  • Quadruples
  • Triples
  • Indirect triples



Quadruples
• Each instruction in the quadruple representation is divided into four fields: op, arg1, arg2, and result. The example a = b + c * d is shown below in quadruple form.
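A reconstruction of the quadruple table for the earlier example a = b + c * d (the temporaries r1 and r2 follow that example; the original figure is not reproduced here):

    op    arg1    arg2    result
    *     c       d       r1
    +     b       r1      r2
    =     r2              a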



Triples
• Each instruction in the triple representation has three fields: op, arg1, and arg2. The result of a sub-expression is referred to by the position (number) of the instruction that computes it.
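A reconstruction of the corresponding triples, where a sub-expression's result is referred to by its instruction number (layout assumed):

    #      op    arg1    arg2
    (0)    *     c       d
    (1)    +     b       (0)
    (2)    =     a       (1)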



Indirect Triples
• This representation is an enhancement of the triple representation.
• Instead of using the positions of triples directly, it keeps a separate list of pointers to the triples.
• This enables the optimizer to freely reorder the sub-expressions (by rearranging the pointer list) to produce optimized code.
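A reconstruction for the same statement, with a separate instruction list pointing at the triples (the list numbering is illustrative):

    Instruction list       Triples
    35: (0)                (0)   *    c     d
    36: (1)                (1)   +    b     (0)
    37: (2)                (2)   =    a     (1)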

