CD Module 5 Answers
Generation - Answers
Part A (3 Marks)
1. With suitable example explain induction variable elimination technique for
loop optimization.
Induction variable elimination is a loop optimization technique that identifies and eliminates
induction variables from loops to improve efficiency. An induction variable is a variable whose
value changes by a fixed amount in each iteration of a loop.
Example:
for (i = 0; i < n; i++) {
j = 4 * i;
a[j] = b[i];
}
In this loop, both i and j are induction variables. The variable j depends on i and increases by 4 in
each iteration. We can eliminate j by substituting its value directly:
for (i = 0; i < n; i++) {
a[4 * i] = b[i];
}
This optimization eliminates the redundant computation of j in each iteration, reducing the
number of operations and improving loop efficiency.
1. Dead Code Elimination: Removing statements that have no effect on program output
2. Copy Propagation: Including variable propagation and constant propagation
3. Common Subexpression Elimination: Identifying and reusing already computed
expressions
4. Strength Reduction: Replacing expensive operations with cheaper ones
5. Constant Folding: Computing constant expressions at compile time
6. Interchange of Independent Statements: Reordering statements that do not depend on
each other
These transformations preserve the basic block structure while improving code efficiency.
MOV a, R0
MOV R0, b
JMP L2
11. Use of Machine Idioms: Using specialized machine instructions. Example: Using INC a
instead of a = a + 1
1. Instruction Selection: The code generator must map the intermediate representation (IR)
into a code sequence that can be executed by the target machine. The complexity depends
on the level of IR, the nature of the instruction-set architecture, and the desired quality of
the generated code. If the target machine has special instructions like increment (INC),
then a = a+1 can be implemented more efficiently using INC a rather than a sequence of
load, add, and store instructions.
2. Register Allocation: Efficient use of registers is crucial for generating good code. The
use of registers should be coordinated in such a way that a minimal number of loads and
stores are generated. The code generator must decide what values should remain in
registers and when registers need to be stored to memory. Poor register allocation can
significantly degrade performance.
3. Choice of Evaluation Order: The order in which computations are performed can affect
the efficiency of the target code. Some evaluation orders require fewer registers and
instructions than others. Picking the best evaluation order is an NP-complete problem, but
heuristics can be used to find a good order.
Local Optimization:
Applied within a single basic block
Does not consider the flow of control between basic blocks
Examples include common subexpression elimination, constant folding, and dead code
elimination
Simpler to implement than global optimization
Global Optimization:
Applied over a large segment of the program like loops, procedures, functions, etc.
Considers the flow of control between basic blocks
Examples include loop optimization techniques like code motion, induction variable
elimination, and loop unrolling
More complex to implement than local optimization
Provides more opportunities for optimization
Local optimization must be done before applying global optimization
Example:
for(i=0; i<n; i++) {
a[i] = (y + z) * i;
}
In this code, the expression y + z is computed in each iteration but its value doesn't change
throughout the loop. Using code motion, we can move this computation outside the loop:
x = y + z;
for(i=0; i<n; i++) {
a[i] = x * i;
}
By moving the invariant expression y + z outside the loop, we compute it only once instead of n
times, significantly improving the performance for large loops.
(This question is a repeat of question 6. Please refer to the answer provided for question 6.)
10. Illustrate the role of register descriptor and address descriptor in code
generation phase.
In the code generation phase, register descriptors and address descriptors help manage the
allocation and usage of registers efficiently:
Register Descriptor:
Keeps track of what is currently in each register
Initially shows that all registers are empty
Consulted whenever a new register is needed for a computation
Updated as code generation proceeds
Address Descriptor:
Keeps track of the location(s) where the current value of a variable can be found
For each variable, it records whether it is in a register, memory, or both
Helps in determining the best location to access a variable's value
Updated whenever a variable's value changes or is moved to a different location
Example:
t1 = a + b
t2 = c + d
t3 = t1 * t2
When generating code for t1 = a + b, the register descriptors show whether a or b is already in a
register (avoiding a reload), and after the addition the address descriptor for t1 records the
register that now holds its value. The descriptors help optimize register usage and minimize
memory accesses.
11. How the peephole optimization technique makes its role in the compilation
process?
Peephole optimization plays an important role in the compilation process by improving the target
code through local transformations. It examines a small sequence of target instructions (the
peephole) and replaces them with a shorter or faster sequence whenever possible. Its role
includes:
1. Efficiency Improvement: It improves both execution speed and code size by eliminating
redundancies and simplifying instructions.
2. Machine-Dependent Optimization: It operates on the target code, allowing
optimizations specific to the target machine architecture.
3. Final Optimization Pass: It typically runs as one of the final optimization phases,
catching inefficiencies that may have been introduced by earlier code generation steps.
4. Local Transformations: It performs local transformations that might be missed by
global optimization techniques.
5. Specific Optimizations:
o Eliminates redundant load/store operations
o Removes unreachable code
o Performs flow of control optimizations
o Applies algebraic simplifications
o Replaces sequences with machine idioms
By performing these optimizations, peephole optimization helps produce more efficient target
code without changing the program's meaning.
a = b + c
d = b + c + e
Here, the expression b + c is computed twice. Using CSE, we can optimize this code as:
t = b + c
a = t
d = t + e
By computing b + c once and storing its result in a temporary variable t, we avoid the
redundant computation in the second statement. This optimization saves both execution time and
the number of operations performed.
Another example:
x = a * b + a * c
y = a * b + d
After applying CSE:
t = a * b
x = t + a * c
y = t + d
Example:
a = b + c
d = b + c + e
After CSE:
temp = b + c
a = temp
d = temp + e
This optimization reduces computation redundancy and improves execution speed. CSE
is particularly beneficial when the common subexpressions involve expensive operations
like multiplication, division, or function calls.
2. Dead Code Elimination: Dead code elimination removes statements that have no effect
on the program output.
Example:
x = 10;
if (0) {
y = x + 5; // This code is unreachable
}
a = b + 5;
a = c * 2; // The first assignment to 'a' is dead code
After optimization:
x = 10;
a = c * 2;
Example:
x = 5;
y = x + 3;
z = y * 2;
After constant propagation and folding:
x = 5;
y = 8;
z = 16;
This reduces the number of operations at runtime and can enable further optimizations.
Techniques include loop-invariant code motion, where an expression whose value does not
change inside the loop is hoisted out of it:
x = y * z;
for (i = 0; i < n; i++) {
a[i] = x + i;
}
Basic Block:
A basic block is a sequence of consecutive three-address statements that satisfies the following
conditions:
1. Control enters the block only through its first statement (the leader); there are no jumps into
the middle of the block.
2. Control leaves the block only at its last statement, without halting or branching except
possibly at the end.
Each basic block consists of a leader and all statements up to, but not including, the next leader
or the end of the program.
Structure preserving transformations optimize code within a basic block without changing its
structure or control flow. The main techniques include:
1. Dead Code Elimination: Removing statements that have no effect on the program
output.
Example:
x = a + b
y = c + d
x = p * q // The first assignment to x is dead
After optimization:
y = c + d
x = p * q
2. Copy Propagation:
o Variable propagation: Replacing occurrences of variables with their assigned
values
o Constant propagation: Replacing variables with their constant values
Example:
a = b
c = a + d // a can be replaced with b
a = b
c = b + d
Example:
t1 = a + b
t2 = c * d
t3 = a + b // Same as t1
After CSE:
t1 = a + b
t2 = c * d
t3 = t1
Example:
x = y * 2 // Multiplication
After strength reduction:
x = y << 1 // Cheaper shift operation
Example:
x = 5 + 3 * 2
After constant folding:
x = 11
Example:
a = b + c
x = y + z
d = a + b
After reordering:
a = b + c
d = a + b
x = y + z
These structure preserving transformations can be implemented efficiently using directed acyclic
graphs (DAGs), which represent the dependencies between operations and operands within the
basic block. By applying these optimizations, compilers can significantly improve code
execution without altering the basic control flow.
Example:
MOV a, R0
MOV R0, b
MOV b, R0 // Redundant load
After optimization:
MOV a, R0
MOV R0, b
The third instruction is redundant since b's value is already in R0 from the second
instruction.
2. Remove Unreachable Code: Eliminates code that can never be executed due to program
flow.
Example:
int func() {
int a = 10, b = 20, c;
c = a * 10;
return c;
b = b * 15; // Unreachable
return b; // Unreachable
}
After optimization:
int func() {
int a = 10, c;
c = a * 10;
return c;
}
Example:
JMP L1
L1: JMP L2
After optimization:
JMP L2
This eliminates the intermediate jump by transferring control directly to the final
destination.
Examples:
o X = X * 1 → No operation (eliminated)
o X = X + 0 → No operation (eliminated)
o X = X * 2 → X = X << 1 (replace multiplication with shift)
o X = X * 0 → X = 0 (direct assignment)
5. Use of Machine Idioms: Replaces instruction sequences with specialized machine
instructions that perform the same operation more efficiently.
Example:
MOV a, R0
ADD #1, R0
MOV R0, a
If the target machine has an increment instruction, this can be replaced with:
INC a
This uses the machine's specialized instruction to perform the operation more efficiently.
These peephole optimization techniques, although local in scope, can significantly improve code
performance and size. They are particularly effective at cleaning up inefficiencies introduced by
earlier code generation phases.
The code generator takes as input the intermediate representation (IR) created by the
front end of the compiler, along with symbol table information. This IR can be in various
forms like postfix notation, three-address code, or syntax trees.
Key considerations:
The target machine architecture (RISC, CISC, or stack-based) significantly impacts the
difficulty of code generation. RISC machines with many registers and simple instruction
sets generally allow for easier code generation than CISC machines with complex
addressing modes and fewer registers.
2. Instruction Selection:
The code generator must map the IR operations into target machine instructions
efficiently. This process is influenced by:
If the IR is high-level, each IR statement may translate into multiple machine instructions
using code templates. If the IR reflects low-level details of the target machine, more
efficient code can be generated.
The uniformity and completeness of the target instruction set are important factors. When
focusing on efficiency, the code generator must consider:
o Instruction speed
o Machine idioms
o Special-purpose instructions
Example: For the three-address statement a = a+1, a naive translation might be:
MOV a, R0
ADD #1, R0
MOV R0, a
But if the target machine has an increment instruction, this can be replaced with the more
efficient:
INC a
Making these selections optimally is challenging and requires knowledge of both the IR
semantics and the target machine capabilities.
3. Register Allocation:
Efficient use of registers is crucial for generating high-quality code. The code generator
must decide:
o Which values should reside in registers at each point in the program (register allocation)
o Which specific register each value should occupy (register assignment)
The code generator typically uses register and address descriptors to keep track of what is
in each register and where each variable is stored (in registers, memory, or both).
Poor register allocation can significantly degrade code performance due to unnecessary
load and store operations.
These three issues are central to the design of an effective code generator. Addressing them well
leads to faster, more compact target code, while poor decisions can result in inefficient execution
despite good optimization in earlier phases.
Code optimization techniques can be broadly classified into machine-independent and machine-
dependent optimizations, as well as local and global optimizations. Here are the key optimization
techniques:
These are performed within a basic block (a sequence of statements with no branches in or out
except at the beginning and end).
a) Structure Preserving Transformations:
Common Subexpression Elimination:
a = b + c
d = b + c + e
After CSE:
temp = b + c
a = temp
d = temp + e
Copy Propagation:
x = y
z = x + 1
After optimization:
x = y
z = y + 1
Constant Folding:
x = 5 + 3 * 2
After optimization:
x = 11
Dead Code Elimination: Removes code that has no effect on the program output.
x = 10
x = 20 // First assignment is dead
After optimization:
x = 20
b) Algebraic Transformations: Simplify expressions using algebraic identities, for example
x + 0 → x and x * 1 → x.
2. Loop Optimizations:
These optimizations are applied across basic blocks, particularly focusing on loops:
a) Code Motion (Loop Invariant Code Motion): Moves expressions that don't change within a
loop to outside the loop.
b) Induction Variable Elimination: Eliminates variables that change by a fixed amount in each
iteration.
Example:
for(i=0; i<n; i++) {
x = i * 4;
a[x] = b[i];
}
After optimization:
x = 0;
for(i=0; i<n; i++) {
a[x] = b[i];
x = x + 4;
}
c) Loop Unrolling: Reduces loop overhead by duplicating the loop body and decreasing the
number of iterations.
d) Loop Jamming (Loop Fusion): Combines similar loops to reduce loop overhead.
3. Machine-Dependent Optimizations:
a) Peephole Optimization: Examines a small window of target instructions and replaces them
with more efficient sequences.
d) Instruction Selection: Chooses the most efficient machine instructions for each operation.
These optimization techniques, when applied appropriately, can significantly improve program
performance by reducing execution time and memory usage. The effectiveness of each technique
depends on the program structure and the target machine architecture.
First, we'll convert the expression x = (a-b)+(a-c)+(a-c) into three-address code form:
t1 = a - b
t2 = a - c
t3 = a - c // Note: This is a common subexpression
t4 = t1 + t2
t5 = t4 + t3
x = t5
After CSE:
t1 = a - b
t2 = a - c
t4 = t1 + t2
t5 = t4 + t2 // Reusing t2 instead of computing t3
x = t5
Step 2: Generate Target Code
Assuming a simple machine with load, store, and arithmetic instructions, we can generate the
target code. Let's use registers R0 and R1 for our code generation:
// t1 = a - b
MOV a, R0 // Load a into R0
SUB b, R0 // R0 = a - b (t1)
// t2 = a - c
MOV a, R1 // Load a into R1
SUB c, R1 // R1 = a - c (t2)
// t4 = t1 + t2
ADD R1, R0 // R0 = t1 + t2 (t4)
// t5 = t4 + t2
ADD R1, R0 // R0 = t4 + t2 (t5)
// x = t5
MOV R0, x // Store the result in x
We can further optimize the code by keeping values in registers as much as possible:
1. Common Subexpression Elimination: We identified that (a-c) appears twice and computed it
only once.
2. Register Allocation: We used registers efficiently to minimize loads and stores.
3. Code Motion: Although not explicitly shown, we arranged the code to minimize register
pressure.
This demonstrates how a compiler's code generator translates high-level statements into efficient
machine code through intermediate representation and optimization techniques.
Solution:
The design of a code generator involves several important issues that affect the quality and
efficiency of the target code. Based on the provided module notes, the key design issues are:
The input consists of intermediate representation (IR) of the source program produced by the
front end, along with symbol table information.
Choices for intermediate language include postfix notation, three-address representations
(quadruples), virtual machine representations (stack machine code), and graphical
representations (syntax trees and DAGs).
The front end has already performed scanning, parsing, translation, and type checking before
code generation begins.
Code generation proceeds assuming the input is error-free.
2. Target Programs
The output can be absolute machine language, relocatable machine language, or assembly
language.
Absolute machine language can be placed in fixed memory locations and immediately executed.
Relocatable machine language allows separate compilation of subprograms, which can be linked
and loaded by a linking loader.
Assembly language output makes code generation somewhat easier.
The instruction set architecture significantly impacts code generation difficulty:
o RISC machines: Many registers, three-address instructions, simple addressing modes
o CISC machines: Few registers, two-address instructions, various addressing modes,
register classes, variable-length instructions
o Stack-based machines: Operations performed on stack elements
3. Memory Management
Variable names are mapped to addresses cooperatively by the front end and code generator.
Names and widths (storage sizes) are obtained from the symbol table.
Each three-address code is translated to addresses and instructions during code generation.
Relative addressing is used for instructions.
4. Instruction Selection
The IR program must be mapped into a code sequence for the target machine, choosing the most
efficient instructions available for each operation.
5. Register Allocation
The code generator must decide which values to keep in registers and which in memory, since
register operands are much faster to access than memory operands.
Solution:
Basic block optimization involves applying various transformations to improve code efficiency
without changing program semantics. Below are key optimization techniques with examples:
1. Structure Preserving Transformations
A. Dead Code Elimination
Dead code refers to statements that compute values never used in subsequent computations or
that cannot be reached during execution.
Example:
t1 = a + b
t2 = c + d // t2 is never used: dead code
x = t1 * 2
After elimination:
t1 = a + b
x = t1 * 2
B. Copy Propagation
t1 = a + b
t2 = t1
t3 = t2 * c
After propagation:
t1 = a + b
t3 = t1 * c
C. Common Subexpression Elimination
t1 = a + b
t2 = c * d
t3 = a + b
t4 = t3 * e
After elimination:
t1 = a + b
t2 = c * d
t4 = t1 * e
D. Strength Reduction
t1 = i * 8
After reduction:
t1 = i << 3
E. Constant Folding
t1 = 3 * 4
t2 = t1 + a
After folding:
t2 = 12 + a
F. Interchange of Independent Statements
t1 = a + b
t2 = c + d
t3 = t1 * t2
After interchange:
t2 = c + d
t1 = a + b
t3 = t1 * t2
2. Optimization Using DAGs
A directed acyclic graph (DAG) of a basic block exposes common subexpressions.
Example:
t1 = a + b
t2 = a + b
t3 = t1 * c
t4 = t2 * d
A DAG would show that t1 and t2 compute the same expression, leading to:
t1 = a + b
t3 = t1 * c
t4 = t1 * d
3. Algebraic Transformations
Examples:
x + 0 → x (additive identity)
x * 1 → x (multiplicative identity)
x - x → 0 (self-subtraction)
x + x → 2 * x (addition to multiplication)
x * 2ⁿ → x << n (multiplication by power of 2)
Question 5.a: With suitable examples, explain the following loop optimization
techniques: (i) Code motion (ii) Induction variable elimination and (iii) Strength
reduction
Solution:
1. Code Motion
Code motion involves moving computations out of loops when their results don't change within
the loop, reducing unnecessary repeated calculations.
Example:
for(i = 0; i < n; i++) {
a[i] = x * y + b[i]; // x * y is loop-invariant
}
After code motion:
t = x * y;
for(i = 0; i < n; i++) {
a[i] = t + b[i];
}
2. Induction Variable Elimination
Induction variables change by a fixed amount in each iteration. When one induction variable
can be derived from another, the derived variable can be updated directly instead of being
recomputed.
Example:
for(i = 0; i < n; i++) {
j = 4 * i;
a[j] = b[i] + 1;
}
After elimination:
j = 0;
for(i = 0; i < n; i++) {
a[j] = b[i] + 1;
j = j + 4; // Update j directly
}
3. Strength Reduction
Strength reduction replaces expensive operations with equivalent but less expensive ones,
especially useful within loops.
Example:
for(i = 0; i < n; i++) {
a[i] = i * 8 + y;
}
After strength reduction:
x = 0;
for(i = 0; i < n; i++) {
a[i] = x + y;
x = x + 8; // Replace multiplication with addition
}
Here, the multiplication i * 8 is replaced with repeated addition. We initialize x = 0 and add 8
in each iteration. This changes the multiplication (a costly operation) to addition (a less
expensive operation).
Another example:
x = i * 4;
After strength reduction:
x = i << 2;
In this case, multiplication by 4 is replaced with a left shift by 2 bits, which is typically faster on
most hardware.
Solution:
Peephole Optimization
Peephole optimization is a machine-dependent optimization technique that improves target code
by examining short sequences of instructions (called the "peephole") and replacing them with
more efficient sequences. It views a small window of instructions at a time and makes local
improvements.
The peephole acts as a small, moving window on the target program that scans a few instructions
at a time to find inefficiencies that can be improved.
1. Elimination of Redundant Loads and Stores
This transformation eliminates unnecessary load and store instructions, particularly when values
are already in registers or when a value is stored and immediately reloaded.
Example:
MOV R0, a ; Store R0 into a
MOV a, R0 ; Redundant load: a's value is already in R0
Optimized code:
MOV R0, a ; The redundant load is removed
2. Elimination of Unreachable Code
This transformation removes code that can never be executed due to program control flow.
MOV a, R0
JMP L2 ; Jump to label L2
MOV b, R1 ; This instruction is unreachable
ADD R1, R0 ; This instruction is unreachable
L2: SUB c, R0
Optimized code:
MOV a, R0
JMP L2 ; Jump to label L2
L2: SUB c, R0
3. Flow-of-Control Optimization
This transformation eliminates jumps to jumps by transferring control directly to the final target.
Example:
JMP L1 ; Jump to L1
...
L1: JMP L2 ; L1 jumps to L2
Optimized code:
JMP L2 ; Jump directly to L2
...
L1: JMP L2 ; L1 jumps to L2
The code now jumps directly to L2, avoiding the intermediate jump.
4. Algebraic Simplifications
This transformation replaces sequences that compute common algebraic identities with simpler
sequences.
MOV a, R0
SUB R0, R0 ; R0 = R0 - R0
Optimized code:
MOV #0, R0 ; R0 = 0
Since X - X = 0 for any X, the subtraction can be replaced with a direct assignment of zero.
5. Use of Machine Idioms
This transformation replaces instruction sequences with specialized machine instructions that
perform the same task more efficiently.
Example:
MOV i, R0
ADD 1, R0 ; i = i + 1
MOV R0, i
Optimized code:
INC i
Using the specialized increment instruction is more efficient than loading, adding, and storing.
Question 6.a: Explain any three issues in the design of a code generator.
Solution:
This question is similar to question 4.a. Here I'll focus on three specific issues in the design of a
code generator:
1. Instruction Selection
Instruction selection is a fundamental issue in code generator design, involving mapping
intermediate representation (IR) to target machine instructions.
Key aspects:
The complexity of instruction selection depends on the IR level, the instruction set architecture,
and desired code quality.
If the IR is high-level, each statement might require multiple machine instructions, often using
code templates.
Low-level IR that reflects machine details can enable more efficient code generation.
The nature of the instruction set significantly affects selection difficulty:
o Uniformity: Regular patterns in instructions make selection easier
o Completeness: Having instructions for all required operations simplifies mapping
o Instruction speed: Different instructions have different execution costs
o Machine idioms: Special instructions for common operations can improve performance
Example: For the statement a = a + 1, different machine architectures offer different optimal
implementations: a machine with an increment instruction can use a single INC a, while others
must use a load, add, and store sequence.
2. Register Allocation
Register allocation determines which values stay in registers (fast access) and which are stored in
memory (slower access).
Key aspects:
o Deciding which values should reside in registers at each program point
o Assigning specific registers to those values (register assignment)
o Minimizing the number of loads and stores generated
3. Choice of Evaluation Order
The order in which expressions are evaluated affects register usage and code efficiency.
Key aspects:
o Some evaluation orders require fewer registers than others
o Finding the optimal order is NP-complete, so heuristics are used
Example: For the expression a + b * (c + d), two possible evaluation orders are:
o Evaluate b * (c + d) first, then add a
o Load a into a register first, then evaluate b * (c + d) while holding a
The first approach uses fewer registers and is generally more efficient.
Solution:
t1 = A - B
t2 = A - C
t3 = A - C // This is a common subexpression
t4 = t1 + t2
t5 = t4 + t3
W = t5
After CSE:
t1 = A - B
t2 = A - C
t4 = t1 + t2
t5 = t4 + t2 // Reused t2 instead of computing t3
W = t5
Note: A further algebraic optimization could recognize that (A-C)+(A-C) = 2*(A-C), which
might be beyond most basic compilers but demonstrates additional optimization potential.
Solution:
Common subexpression elimination identifies repeated expressions and computes them only
once, storing the result for reuse.
Process:
1. Identify expressions that are computed more than once with the same values
2. Store the result of the first computation in a temporary variable
3. Replace subsequent computations with references to that variable
Example:
t1 = a + b
t2 = c * d
t3 = a + b // Recomputes a + b
t4 = t3 * e
After CSE:
t1 = a + b
t2 = c * d
t4 = t1 * e // Using t1 instead of recomputing a+b
Benefits:
o Fewer computations at runtime
o Smaller code size
o Faster execution, especially when the eliminated expression is expensive
2. Dead Code Elimination
Dead code elimination removes statements that compute values never used in subsequent
computations or that cannot be reached during execution.
Example:
a = b + c;
e = a - b; // e is never used afterwards: dead code
d = a * 2;
return d;
After elimination:
a = b + c;
d = a * 2;
return d;
Benefits:
o Smaller code size
o Fewer wasted computations at runtime
3. Loop Optimization
Loop optimization techniques improve the efficiency of loops, which are often the most time-
consuming parts of programs.
A. Code Motion
Code motion moves invariant computations outside loops to avoid repeated calculations.
B. Strength Reduction
Strength reduction replaces expensive operations with cheaper ones, particularly in loops.
C. Loop Unrolling
Loop unrolling reduces loop overhead by duplicating the loop body and adjusting the iteration
count.
Solution:
In the given three-address code, we can identify the following common subexpressions:
Original code:
t1 = a+b
x = t1
t2 = a+b // Common subexpression - can reuse t1
t3 = a*c
b = t2
t4 = a*b
y = t4
After CSE:
t1 = a+b
x = t1
t2 = t1 // Reusing t1 instead of recomputing a+b
t3 = a*c
b = t2 // b now has the value of a+b (or t1)
t4 = a*b // Since b changed, this is now a*(a+b)
y = t4
Quadruples:
No.  Op  Arg1  Arg2  Result
0    +   a     b     t1
1    =   t1    -     x
2    =   t1    -     t2
3    *   a     c     t3
4    =   t2    -     b
5    *   a     b     t4
6    =   t4    -     y
Note that in the original code, statement t2 = a+b was redundant as it recomputed a+b which
was already computed in t1. After common subexpression elimination, we directly assign t1 to
t2 without recomputation.
Solution:
procedure generateCode(block)
for each three-address statement in block do
case statement of
x := y op z: // Binary operation
getReg(x, y, z, op);
x := op y: // Unary operation
getReg(x, y, null, op);
x := y: // Assignment
getReg(x, y, null, '=');
label L: // Label
generateLabel(L);
end case
end for
end procedure
The key component of this algorithm is the getReg function, which handles register allocation:
1. Register Descriptor: For each register, it keeps track of the variables currently held in it.
2. Address Descriptor: For each variable, it tracks all locations (registers and/or memory) where its
current value can be found.
These descriptors help optimize register usage and minimize load/store operations.
Dead code refers to statements that compute values not used later in the program or cannot be
reached during execution.
Example:
if (false)
x = y + 5; // This is dead code
a = b + 5; // This is unreachable code if placed after return statement
In this example, the "if" statement with a constant false condition is dead code, while the
assignment after a return statement would be unreachable code. Both can be safely eliminated.
Copy Propagation
Copy propagation involves replacing variables with their defined values throughout the code
where applicable.
Example:
a = b
c = a + d
After propagation:
a = b
c = b + d
Common Sub-expression Elimination
When the same expression appears multiple times, it can be computed once and reused.
Example:
t1 = a + b
t2 = a + b
t3 = t2 * c
After optimization:
t1 = a + b
t3 = t1 * c
Strength Reduction
This involves replacing expensive operations with equivalent but less costly ones.
Example:
x = y * 2
After reduction:
x = y + y // or x = y << 1
Constant Folding
Example:
x = 5 * 10
After folding:
x = 50
Interchange of Independent Statements
Reordering statements that don't depend on each other to optimize register usage or instruction
scheduling.
2. Algebraic Transformations
Commutative laws: x + y = y + x
Associative laws: (x + y) + z = x + (y + z)
Distributive laws: x * (y + z) = x * y + x * z
Identity laws: x + 0 = x, x * 1 = x
Constant folding: 2 + 3 = 5
Example:
x = y * 0
After simplification:
x = 0
9.a) For the following C statement, write the three-address code and quadruples:
S = A-B+C/D-E+F. Also convert the three-address code into machine code.
Three-address code:
t1 = A - B
t2 = C / D
t3 = t1 + t2
t4 = t3 - E
t5 = t4 + F
S = t5
Quadruples:
Op  Arg1  Arg2  Result
-   A     B     t1
/   C     D     t2
+   t1    t2    t3
-   t3    E     t4
+   t4    F     t5
=   t5    -     S
Assuming a simple machine with MOV, ADD, SUB, DIV, and STORE instructions:
MOV A, R0 // Load A into R0
SUB B, R0 // R0 = A - B (t1)
MOV C, R1 // Load C into R1
DIV D, R1 // R1 = C / D (t2)
ADD R1, R0 // R0 = t1 + t2 (t3)
SUB E, R0 // R0 = t3 - E (t4)
ADD F, R0 // R0 = t4 + F (t5)
STORE R0, S // Store the result in S
9.b) Write the Code Generation Algorithm and explain the getreg function.
The algorithm takes a sequence of three-address statements as input and produces target machine
code. For each three-address statement of the form x := y op z:
1. Call getReg(x, y, z, op) to select a location L in which to perform the computation.
2. If y is not already in L, generate MOV y', L, where y' is one of the current locations of y.
3. Generate the instruction OP z', L, where z' is one of the current locations of z.
4. Update the register and address descriptors to record that x is now in L.
The getReg function determines which register to use for computation. It works as follows:
1. If y is already in a register that holds no other variable, and y is not live after this
statement, return that register.
2. Otherwise, return an empty register if one is available.
3. Otherwise, select an occupied register, store (spill) its value to memory, and return it.
4. If no register can be used, select a memory location as L.
This function optimizes register usage by minimizing unnecessary load/store operations, which
are typically more expensive than register-to-register operations.
10.a) Write code generation algorithm. Using this algorithm generate code for
the expression x=(a-b)+(a+c)+(a+c)
t1 = a - b
t2 = a + c
t3 = t1 + t2
t5 = t3 + t2 // Reusing t2 instead of computing t4
x = t5
Machine Code:
// t1 = a - b
MOV a, R0 // Load a into R0
SUB b, R0 // R0 = a - b (t1)
// t2 = a + c
MOV a, R1 // Load a into R1
ADD c, R1 // R1 = a + c (t2)
// t3 = t1 + t2
ADD R1, R0 // R0 = R0 + R1 = t1 + t2 (t3)
// t5 = t3 + t2 (reusing t2 in R1)
ADD R1, R0 // R0 = R0 + R1 = t3 + t2 (t5)
// x = t5
STORE R0, x // Store R0 to x
11.a) With suitable examples explain the following loop optimization techniques:
(i) Code motion (ii) Induction variable elimination and (iii) Strength reduction
(i) Code Motion
Example:
for (i = 0; i < n; i++) {
a[i] = (y + z) * i;
}
After code motion:
x = y + z;
for (i = 0; i < n; i++) {
a[i] = x * i;
}
This optimization reduces the number of operations from 2n to n+1 (one addition and n
multiplications).
(ii) Induction Variable Elimination
Induction variables are variables that change by a fixed amount in each iteration. When multiple
induction variables exist with linear relationships, we can eliminate redundant ones.
Example:
for (i = 0; i < n; i++) {
j = 4 * i;
a[j] = b[i];
}
After elimination:
j = 0;
for (i = 0; i < n; i++) {
a[j] = b[i];
j = j + 4; // j updated directly; 4 * i no longer computed
}
(iii) Strength Reduction
This involves replacing expensive operations with cheaper ones, especially for induction
variables.
Example:
x = 0; // Initialize x to 0
for (i = 0; i < n; i++) {
a[x] = b[i];
x = x + 4; // Addition instead of multiplication
}
This replaces the multiplication operation in each iteration with an addition, which is typically
faster.
The optimization process typically involves creating a DAG representation of the basic block,
identifying optimization opportunities, and then generating improved code based on the
optimized DAG.
12.a) Convert to three-address code and write machine code for given statement:
x = a/b + a/b*(c-d)
Three-address code:
t1 = a / b
t2 = c - d
t3 = a / b // Common subexpression
t4 = t3 * t2
t5 = t1 + t4
x = t5
After CSE:
t1 = a / b
t2 = c - d
t4 = t1 * t2
t5 = t1 + t4
x = t5
Machine Code:
// t1 = a / b
MOV a, R0 // Load a into R0
DIV b, R0 // R0 = a / b (t1)
// t2 = c - d
MOV c, R1 // Load c into R1
SUB d, R1 // R1 = c - d (t2)
// t4 = t1 * t2
MOV R0, R2 // Copy R0 to R2 to preserve t1
MUL R1, R2 // R2 = R2 * R1 = t1 * t2 (t4)
// t5 = t1 + t4
ADD R2, R0 // R0 = R0 + R2 = t1 + t4 (t5)
// x = t5
STORE R0, x // Store R0 to x