Unit 5 – Compiler Design (PH)

Code optimization is a crucial phase in compiler design aimed at enhancing program efficiency by reducing memory usage and CPU time. It involves various techniques categorized into machine-independent and machine-dependent optimizations, including dead code elimination, common subexpression elimination, and loop optimization strategies. The document also discusses the importance of basic blocks and flow graphs in the optimization process.


Compiler Design

Unit –V

Code Optimization:

Code optimization is a program transformation strategy that improves the intermediate code so that a program
uses the least possible memory, minimizes its CPU time, and runs faster.

Reasons for Optimizing the Code

● Code optimization is essential to improve the execution speed and efficiency of the generated code.
● It helps deliver efficient target code by lowering the number of instructions in a program.

When to Optimize?

Code optimization is an important step that is usually performed at the last stage of development.

Role of Code Optimization:

● It is the fifth phase of a compiler, and it is optional: a compiler may skip it entirely.
● It helps reduce the storage space required and increases execution speed.
● It takes intermediate code as input and attempts to produce optimal code.
● Performing optimization by hand is tedious, so it is preferable to let a code optimizer do the work.

Different Types of Optimization

Optimization is classified broadly into two types:

●​ Machine-Independent
●​ Machine-Dependent

Machine-Independent Optimization

It improves the efficiency of the intermediate code by transforming parts of the code in ways that do not
depend on hardware components such as CPU registers or absolute memory addresses. It usually optimizes code by
removing redundancies and unneeded code.
do
{
    item = 10;
    amount = amount + item;
} while (amount < 100);

This code repeats the assignment to the identifier item on every iteration; the assignment can be moved out of the loop:

item = 10;
do
{
    amount = amount + item;
} while (amount < 100);

Machine-Independent Optimization Techniques:

●​ Compile Time Evaluation


●​ Common Subexpression Elimination
●​ Variable Propagation
●​ Dead Code Elimination
●​ Code Movement
●​ Strength Reduction

Machine-Dependent Optimization

Machine-dependent optimization is performed after the target code has been generated and transformed according
to the target machine architecture. It makes use of CPU registers and may use absolute rather than relative
memory addresses. Machine-dependent optimizers try to take maximum advantage of the memory hierarchy.

Optimization of Basic Blocks:

Optimization is applied to basic blocks after the intermediate code generation phase of the compiler.
Optimization is the process of transforming a program so that the improved code consumes fewer resources
and runs faster. In optimization, high-level constructs are replaced by equivalent, more efficient low-level
code. Optimization of basic blocks can be machine-dependent or machine-independent. These transformations
are useful for improving the quality of the code that will ultimately be generated from a basic block.

There are two types of basic block optimizations:


1. ​ Structure preserving transformations
2. ​ Algebraic transformations

Structure-Preserving Transformations:

The structure-preserving transformation on basic blocks includes:

1. ​ Dead Code Elimination


2. ​ Common Subexpression Elimination
3. ​ Renaming of Temporary variables
4. ​ Interchange of two independent adjacent statements

1. Dead Code Elimination:

Dead code is the part of a program that is never executed during program execution, so it is eliminated
during optimization. Removing it reduces code size and increases speed, since the compiler does not have to
translate the dead code.

Example:
// Program with dead code

int main()
{
    int x = 2;
    if (x > 2)
        cout << "code"; // Dead code: x > 2 is never true
    else
        cout << "Optimization";
    return 0;
}

// Optimized program without dead code

int main()
{
    int x = 2;
    cout << "Optimization"; // Dead code eliminated
    return 0;
}

2. Common Subexpression Elimination:

In this technique, a subexpression that appears more than once is computed only once and its value is reused
where needed. A DAG (Directed Acyclic Graph) representation of the basic block is used to detect and eliminate
common subexpressions.

Example:
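The source's example figure is not reproduced here; the following sketch (function names are ours, not the source's) shows the transformation on straight-line code:

```cpp
// Before CSE: the subexpression (a + b) is evaluated twice.
int before_cse(int a, int b, int c, int d) {
    int x = (a + b) * c;
    int y = (a + b) * d;
    return x + y;
}

// After CSE: (a + b) is evaluated once into a temporary and reused.
// A DAG for the block would have a single node for a + b, shared by
// both multiplications, which is how the redundancy is detected.
int after_cse(int a, int b, int c, int d) {
    int t = a + b;  // the common subexpression, computed once
    int x = t * c;
    int y = t * d;
    return x + y;
}
```

Both versions return the same value for any inputs; for example, both before_cse(1, 2, 3, 4) and after_cse(1, 2, 3, 4) yield 21.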

3. Renaming of Temporary Variables:

Statements containing instances of a temporary variable can be changed to instances of a new temporary
variable without changing the basic block value.

Example: Statement t = a + b can be changed to x = a + b where t is a temporary variable and x is a new


temporary variable without changing the value of the basic block.
4. Interchange of Two Independent Adjacent Statements:
If a block has two adjacent statements that are independent of each other, they can be interchanged without
affecting the value of the basic block.

Example:
t1 = a + b

t2 = c + d

These two independent statements of a block can be interchanged without affecting the value of the block.

Algebraic Transformation:

Countless algebraic transformations can be used to change the set of expressions computed by a basic block
into an algebraically equivalent set. Some of the algebraic transformation on basic blocks includes:

1. ​ Constant Folding
2. ​ Copy Propagation
3. ​ Strength Reduction

1. Constant Folding:
Evaluate expressions consisting only of constants at compile time, so the compiled program does not need to
compute them at run time.

Example:
x = 2 * 3 + y ⇒ x = 6 + y (Optimized code)

2. Copy Propagation:
It is of two types, Variable Propagation, and Constant Propagation.

Variable Propagation:
x = y
z = x + 2 ⇒ z = y + 2 (Optimized code)

Constant Propagation:
x = 3
z = x + a ⇒ z = 3 + a (Optimized code)
3. Strength Reduction:
Replace expensive instructions with cheaper ones; for example, a multiplication by two can be replaced by an
addition or a shift.

x = 2 * y (costly) ⇒ x = y + y (cheaper)
x = 2 * y (costly) ⇒ x = y << 1 (cheaper)
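Both rewrites compute exactly the same value for any integer y, which can be checked directly; a minimal sketch (the function names are ours):

```cpp
// Strength reduction: replace the costly multiplication by the
// cheaper addition or shift. All three functions are equivalent
// (the shift form assumes non-negative y in portable C++).
int twice_mul(int y)   { return 2 * y; }
int twice_add(int y)   { return y + y; }
int twice_shift(int y) { return y << 1; }

// Compilers apply the same idea to other constant multipliers,
// e.g. rewriting 15 * y as (y << 4) - y, since 15*y = 16*y - y.
int fifteen_mul(int y)   { return 15 * y; }
int fifteen_shift(int y) { return (y << 4) - y; }
```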

Loop Optimization:

Loop optimization includes the following strategies:

1. ​ Code motion & Frequency Reduction


2. ​ Induction variable elimination
3. ​ Loop merging/combining
4. ​ Loop Unrolling

1. Code Motion & Frequency Reduction


Move loop invariant code outside of the loop.

// Program with a loop-invariant assignment inside the loop

int main()
{
    for (i = 0; i < n; i++) {
        x = 10;      // invariant: same value every iteration
        y = y + i;
    }
    return 0;
}

// Program with the invariant assignment moved outside the loop

int main()
{
    x = 10;
    for (i = 0; i < n; i++)
        y = y + i;
    return 0;
}

2. Induction Variable Elimination:


Eliminate various unnecessary induction variables used in the loop.

// Program with multiple induction variables

int main()
{
    i1 = 0;
    i2 = 0;
    for (i = 0; i < n; i++) {
        A[i1++] = B[i2++];
    }
    return 0;
}

// Program with one induction variable

int main()
{
    for (i = 0; i < n; i++) {
        A[i] = B[i]; // Only one induction variable
    }
    return 0;
}
3. Loop Merging/Combining:
If the operations performed can be done in a single loop then, merge or combine the loops.

// Program with multiple loops

int main()
{
    for (i = 0; i < n; i++)
        A[i] = i + 1;
    for (j = 0; j < n; j++)
        B[j] = j - 1;
    return 0;
}

// Program with one loop after the loops are merged

int main()
{
    for (i = 0; i < n; i++) {
        A[i] = i + 1;
        B[i] = i - 1;
    }
    return 0;
}

4. Loop Unrolling:
If a loop executes a small, fixed number of times, its body can be replicated and the loop control removed,
replacing the loop with straight-line code.

// Program with a loop

int main()
{
    for (i = 0; i < 3; i++)
        cout << "Cd";
    return 0;
}

// Unrolled program without the loop

int main()
{
    cout << "Cd";
    cout << "Cd";
    cout << "Cd";
    return 0;
}

Basic Blocks:

A basic block is a straight-line sequence of statements. Apart from the entry and the exit, a basic block has
no branches in or out: control enters at the beginning and always leaves at the end without halting in between.
The instructions of a basic block therefore always execute in sequence.

The first step is to divide a sequence of three-address instructions into basic blocks. A new basic block
begins at each leader instruction and extends up to, but not including, the next leader. If no jumps or labels
are present, control flows from one instruction to the next in sequential order.
The algorithm partitions the three-address code into basic blocks using the following leader rules:

1. The first instruction of the sequence is a leader.
2. Any instruction that is the target of a conditional or unconditional jump is a leader.
3. Any instruction that immediately follows a conditional or unconditional jump is a leader.
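A minimal sketch of this partitioning (the instruction model and names are ours; real compilers work on richer three-address instructions):

```cpp
#include <set>
#include <vector>

// Toy model of a three-address instruction: 'target' is the index of
// the instruction it jumps to, or -1 if it is not a jump.
struct Instr {
    int target = -1;
};

// Assign each instruction to a basic block using the leader rules:
// rule 1: the first instruction is a leader;
// rule 2: any jump target is a leader;
// rule 3: any instruction immediately after a jump is a leader.
std::vector<int> partition(const std::vector<Instr>& code) {
    std::set<int> leaders;
    if (!code.empty()) leaders.insert(0);            // rule 1
    for (int i = 0; i < (int)code.size(); ++i) {
        if (code[i].target >= 0) {
            leaders.insert(code[i].target);          // rule 2
            if (i + 1 < (int)code.size())
                leaders.insert(i + 1);               // rule 3
        }
    }
    std::vector<int> block(code.size());
    int b = -1;
    for (int i = 0; i < (int)code.size(); ++i) {
        if (leaders.count(i)) ++b;  // each leader starts a new block
        block[i] = b;
    }
    return block;
}
```

For five instructions where instruction 1 jumps back to 0, the leaders are 0 and 2, giving blocks {0, 1} and {2, 3, 4}.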

Flow Graph:
A flow graph is simply a directed graph. For a set of basic blocks, a flow graph shows the flow of control
information. A control flow graph is used to depict how program control is passed among the blocks.
Once the intermediate code has been partitioned into basic blocks, the flow graph illustrates the flow of
control between them. There is an edge from block X to block Y if the first instruction of Y can follow the
last instruction of X.

To construct the flow graph for the example used in basic block formation, we first compute the basic blocks
and then add the control-flow edges between them. (The worked example and its flow graph figure are not
reproduced here.)

Dead Code Elimination:

In software development, optimizing program efficiency and maintaining clean code are crucial goals. Dead code
elimination, an essential technique employed by compilers and interpreters, plays a significant role in
achieving these objectives. This section explores the concept of dead code elimination, its importance in
program optimization, and its benefits, including the process of identifying and eliminating dead code.
Understanding Dead Code
Dead code refers to sections of code within a program that are never executed at runtime and have no impact
on the program’s output or behavior. Identifying and removing dead code is essential for improving program
efficiency, reducing complexity, and enhancing maintainability.

Benefits of Dead Code Elimination


· Enhanced Program Efficiency: By removing dead code, unnecessary computations and memory usage
are eliminated, resulting in faster and more efficient program execution.
· Improved Maintainability: Dead code complicates the understanding and maintenance of software
systems. By eliminating it, developers can focus on relevant code, improving code readability, and
facilitating future updates and bug fixes.
· Reduced Program Size: Dead code elimination significantly reduces the size of executable files,
optimizing resource usage and improving software distribution.
Process of Dead Code Elimination
Dead code elimination is primarily performed by compilers or interpreters during the compilation or
interpretation process. Here’s an overview of the process:
· Static Analysis: The compiler or interpreter analyzes the program’s source code or intermediate
representation using various techniques, including control flow analysis and data flow analysis.
· Identification of Dead Code: Through static analysis, the compiler identifies sections of code that are
provably unreachable or have no impact on the program’s output.
· Removal of Dead Code: The identified dead code segments are eliminated from the final generated
executable, resulting in a more streamlined and efficient program.
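The "provably unreachable" case above can be sketched as a reachability walk over the control-flow graph; the mini-CFG below is invented for illustration:

```cpp
#include <vector>

// Mark every basic block reachable from the entry block. Any block
// left unmarked can never execute, so it is dead code and may be
// dropped from the generated executable.
std::vector<bool> reachable(const std::vector<std::vector<int>>& succ,
                            int entry) {
    std::vector<bool> seen(succ.size(), false);
    std::vector<int> work = {entry};   // simple depth-first worklist
    while (!work.empty()) {
        int b = work.back();
        work.pop_back();
        if (seen[b]) continue;
        seen[b] = true;
        for (int s : succ[b]) work.push_back(s);
    }
    return seen;
}
```

With successor lists {{1}, {3}, {3}, {}} and entry block 0, blocks 0, 1, and 3 are marked live, while block 2, which has no predecessors, is reported dead.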

Loop Optimization in Compiler Design:

Loop Optimization is the process of increasing execution speed and reducing the overheads associated with
loops. It plays an important role in improving cache performance and making effective use of parallel processing
capabilities. Most execution time of a scientific program is spent on loops.
Loop optimization is a machine-independent optimization, whereas peephole optimization is a machine-dependent
technique.
Decreasing the number of instructions in an inner loop improves the running time of a program even if the
amount of code outside that loop is increased.

Loop Optimization Techniques:


In the compiler, we have various loop optimization techniques, which are as follows:

1. Code Motion (Frequency Reduction)

In frequency reduction, the amount of code in the loop is decreased. A statement or expression, which can be
moved outside the loop body without affecting the semantics of the program, is moved outside the loop.
Example:
Before optimization:
while(i<100)
{
a = Sin(x)/Cos(x) + i;
i++;
}

After optimization:

t = Sin(x)/Cos(x);
while(i<100)
{
a = t + i;
i++;
}

2. Induction Variable Elimination

If the value of any variable in any loop gets changed every time, then such a variable is known as an induction
variable. With each iteration, its value either gets incremented or decremented by some constant value.

Example:
Before optimization:
B1
i := i + 1
x := 3 * i
y := a[x]
if y < 15, goto B2

In the above example, i and x change in lock step: when i is incremented by 1, x is incremented by 3. So i and
x are induction variables.

After optimization:
B1
i := i + 1
x := x + 3
y := a[x]
if y < 15, goto B2

3. Strength Reduction

Strength reduction deals with replacing expensive operations with cheaper ones like multiplication is costlier
than addition, so multiplication can be replaced by addition in the loop.

Example:
Before optimization:
while (x<10)
{
y := 3 * x+1;
a[y] := a[y]-2;
x := x+2;
}
After optimization:
t= 3 * x+1;
while (x<10)
{
y=t;
a[y]= a[y]-2;
x=x+2;
t=t+6;
}
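One way to convince yourself the rewrite is safe is to compare the sequence of y values produced by both loops (a sketch with our own helper names, tracking y instead of the array updates):

```cpp
#include <vector>

// y-values produced by the original loop: y = 3*x + 1 each iteration.
std::vector<int> y_original(int x) {
    std::vector<int> ys;
    while (x < 10) {
        ys.push_back(3 * x + 1);
        x = x + 2;
    }
    return ys;
}

// Strength-reduced version: t tracks 3*x + 1 incrementally. Because
// x steps by 2 per iteration, t must step by 3 * 2 = 6.
std::vector<int> y_reduced(int x) {
    std::vector<int> ys;
    int t = 3 * x + 1;
    while (x < 10) {
        ys.push_back(t);
        x = x + 2;
        t = t + 6;
    }
    return ys;
}
```

Starting from x = 0, both versions produce the sequence 1, 7, 13, 19, 25.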

4. Loop Invariant Method

In the loop invariant method, a computation whose value does not change between iterations is moved outside
the loop. Computing the same expression on every iteration is overhead to the system, so hoisting it reduces
computation overhead and hence optimizes the code.
Example:
Before optimization:
for (int i=0; i<10;i++)
t= i+(x/y);
...
end;

After optimization:
s = x/y;
for (int i=0; i<10;i++)
t= i+ s;
...
end;

5. Loop Unrolling

Loop unrolling is a loop transformation technique that helps optimize a program’s execution time. We basically
reduce the number of iterations by replicating the loop body. Loop unrolling increases the program’s speed by
eliminating loop control and loop test instructions.
Example:
Before optimization:

for (int i=0; i<5; i++)


​ printf("Pankaj\n");

After optimization:

printf("Pankaj\n");
printf("Pankaj\n");
printf("Pankaj\n");
printf("Pankaj\n");
printf("Pankaj\n");

6. Loop Jamming

Loop jamming combines two or more loops into a single loop. It reduces the loop overhead incurred by running
many separate loops.

Example:
Before optimization:

for(int i=0; i<5; i++)


​ a = i + 5;
for(int i=0; i<5; i++)
​ b = i + 10;

After optimization:

for(int i=0; i<5; i++)


{
a = i + 5;
b = i + 10;
}
7. Loop Fission

Loop fission improves locality of reference. In loop fission, a single loop is divided into multiple loops over
the same index range, and each resulting loop contains a particular part of the original loop body.

Example:
Before optimization:

for(x=0;x<10;x++) ​
{
a[x]=…
b[x]=…
}
After optimization:

for(x=0;x<10;x++)
a[x]=…
for(x=0;x<10;x++)
b[x]=…

8. Loop Interchange

In loop interchange, inner loops are exchanged with outer loops. This optimization technique also improves the
locality of reference.

Example:
Before optimization:

for(x=0;x<10;x++)
for(y=0;y<10;y++)
a[y][x]=…

After optimization:
for(y=0;y<10;y++)
for(x=0;x<10;x++)
a[y][x]=…

9. Loop Reversal

Loop reversal reverses the order of values that are assigned to the index variable. This helps in removing
dependencies.

Example:
Before optimization:

for(x=0;x<10;x++)
a[9-x]=…

After optimization:

for(x=9;x>=0;x--)
a[x]=…

10. Loop Splitting

Loop splitting simplifies a loop by dividing it into multiple loops; the loops have the same body but iterate
over different index ranges. Loop splitting helps in reducing dependencies and hence makes the code more
optimized.

Example:
Before optimization:

for(x=0;x<10;x++)
if(x<5)
a[x]=…
else
b[x]=…

After optimization:
for(x=0;x<5;x++)
a[x]=…
for(;x<10;x++)
b[x]=…

11. Loop Peeling

Loop peeling is a special case of loop splitting, in which a problematic iteration of the loop is resolved
separately before entering the loop.

Before optimization:

for(x=0;x<10;x++)
if(x==0)
a[x]=…
else
b[x]=…

After optimization:

a[0]=…
for(x=1;x<10;x++)
b[x]=…

12. Unswitching

Unswitching moves a condition out from inside the loop. This is done by duplicating the loop and placing each
version inside one of the conditional clauses.

Before optimization:

for(x=0;x<10;x++)
if(s>t)
a[x]=…
else
b[x]=…

After optimization:

if(s>t)
for(x=0;x<10;x++)
a[x]=…
else
for(x=0;x<10;x++)
b[x]=…

CODE IMPROVING TRANSFORMATIONS:

Algorithms for performing the code improving transformations rely on data-flow information. Here we consider
common sub-expression elimination, copy propagation and transformations for moving loop invariant
computations out of loops and for eliminating induction variables. Global transformations are not a substitute
for local transformations; both must be performed.

Elimination of global common sub expressions:

• The available expressions data-flow problem discussed in the last section allows us to determine if an
expression at point p in a flow graph is a common sub-expression. The following algorithm formalizes the
intuitive ideas presented for eliminating common sub-expressions.

ALGORITHM: Global common sub expression elimination.

INPUT: A flow graph with available expression information. OUTPUT: A revised flow graph.
METHOD: For every statement s of the form x := y+z such that y+z is available at the beginning of s’s block and
neither y nor z is defined prior to statement s in that block, do the following.

1. To discover the evaluations of y+z that reach s’s block, we follow flow graph edges, searching
backward from s’s block. However, we do not go through any block that evaluates y+z. The last
evaluation of y+z in each block encountered is an evaluation of y+z that reaches s.
2. Create a new variable u.
3. Replace each statement w := y+z found in (1) by
a. u := y+z
b. w := u
4. Replace statement s by x := u.
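On a small flow graph the algorithm rewrites the code as follows (a hand-made illustration, with u the fresh variable from step 2; y and z are not redefined between the blocks, so y+z is available at s):

```
Before:                      After:
  B1:  w := y + z              B1:  u := y + z
       ...                          w := u
                                    ...
  B2:  x := y + z   (s)        B2:  x := u
```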

Data flow analysis in Compiler:

It is the analysis of the flow of data in a control flow graph, i.e., the analysis that determines information
about the definition and use of data in a program. With the help of this analysis, optimization can be
performed. In general, it is a process in which data-flow values are computed at each program point; the
resulting data flow properties represent information that can be used for optimization.

Data flow analysis is a technique used in compiler design to analyze how data flows through a program. It
involves tracking the values of variables and expressions as they are computed and used throughout the
program, with the goal of identifying opportunities for optimization and identifying potential errors.

The basic idea behind data flow analysis is to model the program as a graph, where the nodes represent program
statements and the edges represent data flow dependencies between the statements. The data flow information is
then propagated through the graph, using a set of rules and equations to compute the values of variables and
expressions at each point in the program.

Some of the common types of data flow analysis performed by compilers include:

1. ​ Reaching Definitions Analysis: This analysis tracks the definition of a variable or expression and
determines the points in the program where the definition “reaches” a particular use of the variable or
expression. This information can be used to identify variables that can be safely optimized or eliminated.
2. ​ Live Variable Analysis: This analysis determines the points in the program where a variable or
expression is “live”, meaning that its value is still needed for some future computation. This information can
be used to identify variables that can be safely removed or optimized.
3. ​ Available Expressions Analysis: This analysis determines the points in the program where a particular
expression is “available”, meaning that its value has already been computed and can be reused. This
information can be used to identify opportunities for common subexpression elimination and other
optimization techniques.
4. ​ Constant Propagation Analysis: This analysis tracks the values of constants and determines the points in
the program where a particular constant value is used. This information can be used to identify opportunities
for constant folding and other optimization techniques.
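The first of these analyses can be sketched as a small fixed-point iteration. The block layout, the GEN/KILL sets, and all names below are invented for illustration:

```cpp
#include <bitset>
#include <vector>

// Iterative reaching-definitions analysis over a tiny CFG sketch.
// NDEF is the number of definition sites; GEN/KILL are bit sets.
// Data-flow equations: IN[B]  = union of OUT[P] over predecessors P,
//                      OUT[B] = GEN[B] | (IN[B] & ~KILL[B]).
constexpr int NDEF = 3;
using Defs = std::bitset<NDEF>;

struct Block {
    Defs gen, kill;
    std::vector<int> preds;  // predecessor block indices
};

// Returns IN[B] for every block: the definitions reaching its start.
std::vector<Defs> reaching(const std::vector<Block>& blocks) {
    std::vector<Defs> in(blocks.size()), out(blocks.size());
    bool changed = true;
    while (changed) {  // iterate until a fixed point is reached
        changed = false;
        for (std::size_t b = 0; b < blocks.size(); ++b) {
            Defs newin;
            for (int p : blocks[b].preds) newin |= out[p];
            Defs newout = blocks[b].gen | (newin & ~blocks[b].kill);
            if (newin != in[b] || newout != out[b]) changed = true;
            in[b] = newin;
            out[b] = newout;
        }
    }
    return in;
}

// A made-up example: B0 creates d0 (x=...) and d1 (y=...);
// B1 creates d2 (x=...), killing d0; B2 joins B0 and B1.
std::vector<Defs> example() {
    std::vector<Block> blocks(3);
    blocks[0].gen = Defs("011");    // d0, d1
    blocks[1].gen = Defs("100");    // d2
    blocks[1].kill = Defs("001");   // d2 redefines x, killing d0
    blocks[1].preds = {0};
    blocks[2].preds = {0, 1};       // join point
    return reaching(blocks);
}
```

At the join block B2, all three definitions reach: d1 on both paths, d0 via the direct B0 edge, and d2 via B1.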

Data flow analysis can have a number of advantages in compiler design, including:

1. ​ Improved code quality: By identifying opportunities for optimization and eliminating potential errors,
data flow analysis can help improve the quality and efficiency of the compiled code.
2. ​ Better error detection: By tracking the flow of data through the program, data flow analysis can help
identify potential errors and bugs that might otherwise go unnoticed.
3. ​ Increased understanding of program behavior: By modeling the program as a graph and tracking the flow
of data, data flow analysis can help programmers better understand how the program works and how it can
be improved.

Basic Terminologies – ​

· Definition Point: a point in a program containing some definition.


· Reference Point: a point in a program containing a reference to a data item.
· Evaluation Point: a point in a program containing evaluation of expression.

Data Flow Properties – ​

· Available Expression – An expression is said to be available at a program point x if it has been computed
along every path reaching x and none of its operands has been modified since. An expression is available at its
own evaluation point. An expression a+b is available as long as neither a nor b is modified before its use.
Example – if t1 = a + b is later followed by t2 = a + b with no intervening assignment to a or b, then a + b is
available at the second statement, and the recomputation is redundant.

· Advantage – ​
It is used to eliminate common sub expressions.

· Reaching Definition – A definition D reaches a point x if there is a path from D to x along which D is not
killed, i.e., not redefined.
Example – the definition d1: x = 5 reaches a later use of x as long as no other assignment to x occurs on the
path between them.

· Advantage – ​
It is used in constant and variable propagation.

· Live Variable – A variable is said to be live at a point p if its value is used on some path from p onward
before being redefined; otherwise it is dead at p.
Example – after x = 2;, the variable x is live if a later statement such as y = x + 1; uses it before any
reassignment of x.

· Advantage –
1. ​ It is useful for register allocation.
2. ​ It is used in dead code elimination.
· Busy Expression – An expression is busy along a path if it is evaluated along that path and none of its
operands is redefined before that evaluation.

Advantage – ​
It is used for performing code movement optimization. ​

Features:

· Identifying dependencies: Data flow analysis can identify dependencies between different parts of a program,
such as variables that are read or modified by multiple statements.
· Detecting dead code: By tracking how variables are used, data flow analysis can detect code that is never
executed, such as statements that assign values to variables that are never used.
· Optimizing code: Data flow analysis can be used to optimize code by identifying opportunities for common
subexpression elimination, constant folding, and other optimization techniques.
· Detecting errors: Data flow analysis can detect errors in a program, such as uninitialized variables, by
tracking how variables are used throughout the program.
· Handling complex control flow: Data flow analysis can handle complex control flow structures, such as loops
and conditionals, by tracking how data is used within those structures.
· Interprocedural analysis: Data flow analysis can be performed across multiple functions in a program,
allowing it to analyze how data flows between different parts of the program.
· Scalability: Data flow analysis can be scaled to large programs, allowing it to analyze programs with many
thousands or even millions of lines of code.

Symbolic Analysis in Compiler Design:

Symbolic analysis expresses program values as symbolic expressions: the program’s functional behavior is
derived from an algebraic representation of its computations. During normal execution only numeric values are
computed, and the information about how they were obtained is lost. Symbolic analysis thus helps us understand
the relationships between different computations. It greatly
helps in optimizing our program using optimizing techniques such as constant propagation, strength reduction,
and eliminating redundant computations. It helps us understand and illustrate the region-based analysis of our
program. Symbolic analysis helps us in optimization, parallelization, and understanding the program.
Example:
#include <iostream>

using namespace std;

int main()
{
    int a, b, c;
    cin >> a;
    b = a + 1;
    c = a - 1;
    if (c > a)
        c = c + 1; // never executed: c = a - 1 is always less than a
    return 0;
}

In the above code, symbolic analysis can determine that "if (c > a)" is never true and the line "c = c + 1"
is never executed, which allows the optimizer to remove this block of code.

1. Affine Expressions:

An affine expression is a linear function of its variables plus a constant. In symbolic analysis, we try to
express variables as affine expressions of reference variables whenever possible. Affine expressions appear
frequently in array indexing, so they help in understanding the optimization and parallelization of our program.

An affine expression can also be written in terms of the number of iterations of a loop; a variable defined
this way is termed an induction variable.

#include <iostream>

using namespace std;

int main()
{
    int a[1000];
    for (int induced_loop = 1; induced_loop <= 10; induced_loop++) {
        int induced_var = induced_loop * 10;
        a[induced_var] = 0;
    }
    return 0;
}

induced_var takes values 10,20,30….100. induced_loop takes values 1,2,3…10. Hence both induced_loop and
induced_var are induction variables of this loop.
The above program can be optimized using the strength reduction method, where we try to replace the
multiplication operation with addition, which is a less costly operation.
Optimized Code:
#include <iostream>

using namespace std;

int main()
{
    int a[1000];
    int induced_var = 0;
    for (int induced_loop = 1; induced_loop <= 10; induced_loop++) {
        induced_var += 10;
        a[induced_var] = 0;
    }
    return 0;
}

Sometimes it becomes impossible to express the value held by a variable after a function call as a linear
function, but we can determine other properties of that variable using the symbolic analysis, such as the
comparison between two variables as shown in the example below.

#include <iostream>

using namespace std;

int sum() { return 10; }

int main()
{
    int a = sum();
    int b = a + 10;
    int c = a + 11;
    return 0;
}

Using symbolic analysis we can clearly state that c > b always holds, regardless of the value returned by sum().

2. Data-Flow Problem:

This analysis helps us determine where variable values must be held and also count the iterations of a loop.
The technique uses symbolic maps: a symbolic map acts like a function that maps each variable in the program to
a (symbolic) value. Consider the code below.

#include <iostream>

using namespace std;

int main()
{
    int gfg = 0; // start of region 1
    for (int outer = 100; outer <= 200; outer++) { // start of region 2
        gfg++;
        int temp_outer = gfg * 10;
        int var = 0;
        for (int inner = 10; inner <= 20; inner++) { // start of region 3
            int temp_inner = temp_outer + var;
            var++;
        } // end of region 3
    } // end of region 2
    return 0;
} // end of region 1

Using data flow analysis we try to divide our program into different regions. We then map the variables of our
program to values using the symbolic maps, analyze the program, and reduce the maps to affine expressions. We
also try to keep the block variables exclusive to their regions. In the above example, the variable temp_outer
is used in region 3 but actually belongs to region 2, so we try to get rid of it after understanding its nature
from the symbolic map of our program. We also reduce any operations we can within the program. Hence the code
can be reduced to:
#include <iostream>

using namespace std;

int main()
{
    int gfg = 0; // start of region 1
    int i;       // symbolic iteration count of the outer loop
    int j;       // symbolic iteration count of the inner loop
    for (int outer = 1; outer <= 100; outer++) { // start of region 2
        gfg = i;
        int temp_outer = gfg * 10;
        int var = 0;
        for (int inner = 10; inner <= 20; inner++) { // start of region 3
            int temp_inner = 10 * i + j - 1;
            var = j;
        } // end of region 3
    } // end of region 2
    return 0;
} // end of region 1

The data-flow problem is solved by applying each block’s transfer function to the symbolic map at the block’s
input, as described above.

3. Region-Based Symbolic Analysis:

Region-based analysis has two parts: a bottom-up pass and a top-down pass. The bottom-up pass analyzes a region
by computing a transfer function that maps the symbolic map at the region’s entry to the symbolic map at its
exit. In the top-down pass, the values of the symbolic maps are propagated down to the inner loops of the
program.

Resources:

1. https://byjus.com/gate/code-optimization-in-compiler-design-notes/

2. https://www.geeksforgeeks.org/optimization-of-basic-blocks/

3. https://www.javatpoint.com/flow-graph

4. https://www.geeksforgeeks.org/loop-optimization-in-compiler-design/

5. https://www.geeksforgeeks.org/data-flow-analysis-compiler/

6. https://www.brainkart.com/article/Code-Improvig-Transformations_8115/

7. https://www.geeksforgeeks.org/symbolic-analysis-in-compiler-design/
