OVERVIEW
A compiler translates code written in one language into another language without
changing the meaning of the program; in other words, it is a program that translates an
executable program in one language into an executable program in another language. A
compiler is also expected to make the target code efficient and optimized in terms of time
and space. Compiler design principles provide an in-depth view of the translation and
optimization processes.
Compiler design covers the translation mechanism as well as error detection and
recovery. It includes lexical, syntax and semantic analysis as the front end, and code
generation and optimization as the back end.
Computers are a balanced mix of software and hardware. Hardware refers to the
mechanical devices and their functions, which are controlled by compatible software.
Hardware understands instructions in the form of electronic charge, which is the
counterpart of binary language in software programming. Binary language has only two
symbols, 0 and 1. To instruct the hardware, code must be written in binary format, which
is simply a series of 1s and 0s. It would be a difficult and cumbersome task for
programmers to write such code directly, which is why we have compilers to produce it.
Features of Compilers:
▪ Correctness
▪ Speed of compilation
▪ Preserve the correct meaning of the code
▪ The speed of the target code
▪ Recognize legal and illegal program constructs
▪ Good error reporting/handling
▪ Code debugging help
Preprocessor
A preprocessor, generally considered a part of the compiler, is a tool that produces
input for the compiler. It deals with macro processing, augmentation, file inclusion, language
extension, etc.
Interpreter
An interpreter, like a compiler, translates high-level language into low-level machine
language. The difference lies in the way they read the source code or input. A compiler reads
the whole source code at once, creates tokens, checks semantics, generates intermediate
code, and translates the whole program, which may involve many passes. In contrast, an
interpreter reads a statement from the input, converts it to an intermediate code, executes
it, then takes the next statement in sequence. If an error occurs, an interpreter stops
execution and reports it, whereas a compiler reads the whole program even if it encounters
several errors.
Assembler
An assembler translates assembly language programs into machine code. The output
of an assembler is called an object file, which contains a combination of machine instructions
as well as the data required to place these instructions in memory.
Linker
A linker is a computer program that links and merges various object files together in
order to make an executable file. All these files might have been compiled by separate
assemblers. The major task of a linker is to search for and locate referenced modules/routines
in a program and to determine the memory locations where these codes will be loaded, so
that the program instructions have absolute references.
Loader
The loader is a part of the operating system and is responsible for loading executable files
into memory and executing them. It calculates the size of a program (instructions and data)
and creates memory space for it. It initializes various registers to initiate execution.
TYPES OF COMPILER
Following are the different types of Compiler:
• Single Pass Compilers
• Two Pass Compilers
• Multi-pass Compilers
▪ Single Pass Compiler: In a single pass compiler, the source code is directly transformed
into machine code. For example, Pascal.
▪ Two Pass Compiler: A two pass compiler is divided into two sections: the front end,
which maps legal code into an Intermediate Representation (IR), and the back end,
which maps the IR onto the target machine. The two pass method also simplifies
the retargeting process and allows multiple front ends.
Tasks of Compiler
Main tasks performed by the Compiler are:
• Breaks up the source program into pieces and imposes grammatical structure
on them
• Allows you to construct the desired target program from the intermediate
representation and also create the symbol table
• Compiles source code and detects errors in it
• Manage storage of all variables and codes.
• Support for separate compilation
• Reads and analyzes the entire program, and translates it into a semantically equivalent target program
• Translating the source code into object code depending upon the type of machine
Compiler Construction Tools
• Scanner generators: This tool takes regular expressions as input and generates a lexical
analyzer. For example, LEX for the Unix operating system.
• Syntax-directed translation engines: These software tools produce intermediate code by
using the parse tree. The goal is to associate one or more translations with each
node of the parse tree.
• Parser generators: A parser generator takes a grammar as input and automatically
generates source code which can parse streams of characters with the help of a
grammar.
• Automatic code generators: Take intermediate code and convert it into machine
language
• Data-flow engines: This tool is helpful for code optimization. Here, information
supplied by the user and the intermediate code are compared to analyze relationships.
This is also known as data-flow analysis. It helps you find out how values are transmitted
from one part of the program to another.
Uses of Compiler
• The compiler verifies the entire program, so there are no syntax or semantic errors; the
executable file is optimized by the compiler, so it executes faster
• It allows you to create an internal structure in memory; there is no need to execute the
program on the same machine on which it was built
• Translates the entire program into another language, generates files on disk, links the
files into an executable format, and checks for syntax errors and data types
• Helps you to enhance your understanding of language semantics, and to handle
language performance issues
• Opportunity for a non-trivial programming project. The techniques used for
constructing a compiler can be useful for other purposes as well
• Its design supports the full implementation of High-Level Programming Languages
• Support optimization for Computer Architecture Parallelism
• Design of New Memory Hierarchies of Machines
• Widely used for Translating Programs, and used with other software productivity tools
ARCHITECTURE
A compiler can broadly be divided into two phases based on the way they compile.
They are:
• Analysis Phase: This is known as the front-end of the compiler. This phase reads the
source program, divides it into core parts and then checks for lexical, grammar and
syntax errors. The phase generates an intermediate representation of the source
program and symbol table, which should be fed to the Synthesis phase as input.
• Synthesis Phase: Known as the back-end of the compiler, the synthesis phase
generates the target program with the help of intermediate source code
representation and symbol table. A compiler can have many phases and passes. A pass
refers to the traversal of a compiler through the entire program, while a Phase is a
distinguishable stage, which takes input from the previous stage, processes and yields
output that can be used as input for the next stage. A pass can have more than
one phase.
The compilation process is a sequence of various phases. Each phase takes input from its
previous stage, has its own representation of source program, and feeds its output to the next
phase of the compiler. Let us understand the phases of a compiler.
Lexical Analysis
The first phase of the compiler works as a text scanner. This phase scans the source code
as a stream of characters and converts it into meaningful lexemes. The lexical analyzer
represents these lexemes in the form of tokens as:
<token-name, attribute-value>
Syntax Analysis
The next phase is called the syntax analysis or parsing. It takes the token produced by
lexical analysis as input and generates a parse tree (or syntax tree). In this phase, token
arrangements are checked against the source code grammar, i.e. the parser checks if the
expression made by the tokens is syntactically correct.
Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the
language; for example, that values are assigned between compatible data types, and that a
string is not added to an integer. The semantic analyzer also keeps track of identifiers, their
types and expressions, and whether identifiers are declared before use. The semantic
analyzer produces an annotated syntax tree as output.
Code Generation
In this phase, the code generator takes the optimized representation of the
intermediate code and maps it to the target machine language. The code generator translates
the intermediate code into a sequence of (generally) relocatable machine code. The sequence
of machine-code instructions performs the same task as the intermediate code would.
Symbol Table
It is a data structure which is maintained throughout all the phases of a compiler. All
the identifiers' names along with their types are stored here. The symbol table makes it easier
for the compiler to quickly search the identifier record and retrieve it. The symbol table is also
used for scope management.
LEXICAL ANALYSIS
Lexical analysis is the first phase of a compiler. It takes the modified source code from
language preprocessors, written in the form of sentences. The lexical analyzer breaks this
text into a series of tokens, removing any whitespace and comments in the source
code.
If the lexical analyzer finds a token invalid, it generates an error. The lexical analyzer
works closely with the syntax analyzer. It reads character streams from the source code,
checks for legal tokens, and passes the data to the syntax analyzer when it demands.
Programs that perform lexical analysis in compiler design are called lexical analyzers
or lexers. A lexer contains a tokenizer or scanner.
Tokens
A token is a sequence of characters which represents a unit of information in the source
program. A lexeme is a sequence of characters in the source program that matches the
pattern of a token; in other words, it is an instance of a token. There are some predefined
rules for every lexeme to be identified as a valid token. These rules are defined by grammar
rules, by means of a pattern. A pattern explains what can be a token, and these patterns are
defined by means of regular expressions.
In programming language, keywords, constants, identifiers, strings, numbers,
operators, punctuations, symbols can be considered as tokens. For example, in C language,
the variable declaration line:
int value = 100;
contains the tokens:
int (keyword), value (identifier), = (operator), 100 (constant) and ; (symbol).
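As an illustration, the following is a minimal, hand-written lexer sketch in C (purely hypothetical; real lexical analyzers are usually generated by tools such as LEX) that splits the declaration above into <token-name, lexeme> pairs:

#include <stdio.h>
#include <ctype.h>
#include <string.h>

int main(void) {
    const char *src = "int value = 100;";        /* the source line above */
    const char *p = src;
    char lexeme[32];

    while (*p) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        int n = 0;
        if (isalpha((unsigned char)*p)) {            /* keyword or identifier */
            while (isalnum((unsigned char)*p)) lexeme[n++] = *p++;
            lexeme[n] = '\0';
            printf("<%s, \"%s\">\n",
                   strcmp(lexeme, "int") == 0 ? "keyword" : "identifier",
                   lexeme);
        } else if (isdigit((unsigned char)*p)) {     /* integer constant */
            while (isdigit((unsigned char)*p)) lexeme[n++] = *p++;
            lexeme[n] = '\0';
            printf("<constant, \"%s\">\n", lexeme);
        } else {                                     /* operator or symbol */
            printf("<symbol, \"%c\">\n", *p++);
        }
    }
    return 0;
}

Running it prints <keyword, "int">, <identifier, "value">, <symbol, "=">, <constant, "100"> and <symbol, ";">.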
Specifications of Tokens
Let us understand how the language theory undertakes the following terms:
Alphabets
Any finite set of symbols is an alphabet: {0,1} is the set of binary alphabets, {0,1,2,3,4,5,6,7,8,9, A, B, C,
D, E, F} is the set of hexadecimal alphabets, and {a-z, A-Z} is the set of English language alphabets.
Strings
Any finite sequence of alphabet symbols (characters) is called a string. The length of a string is
the total number of occurrences of symbols in it; e.g., the length of the string mystudent is 9 and
is denoted by |mystudent| = 9. A string having no alphabets, i.e. a string of zero length is
known as an empty string and is denoted by ε (epsilon).
Special symbols
A typical high-level language contains special symbols such as arithmetic operators (+, -, *, /), punctuation (comma, semicolon), assignment (=) and comparison operators (==, <, >).
Language
A language is considered as a finite set of strings over some finite set of alphabet symbols.
Computer languages are considered as finite sets, and mathematical set operations can
be performed on them. Finite languages can be described by means of regular expressions.
Regular Expressions
The lexical analyzer needs to scan and identify only a finite set of valid
strings/tokens/lexemes that belong to the language in hand. It searches for the pattern
defined by the language rules.
Finite Automata
A finite automaton is a state machine that takes a string of symbols as input and changes
its state accordingly. A finite automaton, or finite state machine, is described by five elements
(a 5-tuple). It has a set of rules for moving from one state to another, which depend upon the
applied input symbol. When an input string is fed into the finite automaton, it changes its
state for each symbol. If the input string is successfully processed and the automaton reaches
its final state, the string is accepted, i.e., the string just fed is said to be a valid token of the
language in hand. The mathematical model of a finite automaton consists of:
• Finite set of states (Q)
• Finite set of input symbols (Σ)
• One Start state (q0)
• Set of final states (qf)
• Transition function (δ)
The transition function (δ) takes a state from the finite set of states (Q) and an input symbol from (Σ)
and returns a state in Q, given as δ : Q × Σ → Q
Finite Automata Construction
Let L(r) be a regular language recognized by some finite automata (FA).
• States: States of FA are represented by circles. State names are written inside the
circle.
• Start state: The state from which the automaton starts is known as the start state. The start
state has an arrow pointing towards it.
• Intermediate states: All intermediate states have at least two arrows; one pointing to
and another pointing out from them.
• Final state: If the input string is successfully parsed, the automata is expected to be in
this state. The final state is represented by double circles. It may have any odd number of
arrows pointing to it and an even number of arrows pointing out from it; the number of
incoming arrows is one greater than the number of outgoing arrows, i.e. odd = even + 1.
• Transition: The transition from one state to another state happens when a desired
symbol in the input is found. Upon transition, automata can either move to next state
or stay in the same state. Movement from one state to another is shown as a directed
arrow, where the arrow points to the destination state. If the automaton stays in the same
state, an arrow pointing from the state to itself is drawn.
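As a sketch, the finite automaton below, written in C, recognizes identifiers of the form letter (letter | digit)*; the state numbering, the transition function next_state and the helper accepts are illustrative assumptions, not part of any particular compiler:

#include <stdio.h>
#include <ctype.h>

/* States: 0 = start (q0), 1 = accepting (qf), 2 = dead/reject state */
int next_state(int state, char c) {
    switch (state) {
    case 0: return (isalpha((unsigned char)c) || c == '_') ? 1 : 2;
    case 1: return (isalnum((unsigned char)c) || c == '_') ? 1 : 2;
    default: return 2;
    }
}

int accepts(const char *s) {
    int state = 0;                       /* start in q0 */
    for (; *s; s++) state = next_state(state, *s);
    return state == 1;                   /* accepted only if we end in qf */
}

int main(void) {
    printf("%d\n", accepts("value1"));   /* 1: valid identifier */
    printf("%d\n", accepts("9lives"));   /* 0: rejected */
    return 0;
}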
Lexical Errors
A character sequence which cannot be scanned into any valid token is a lexical
error. Lexical errors are not very common, but they should be managed by the scanner.
Misspellings of identifiers, operators and keywords are considered lexical errors. Generally,
a lexical error is caused by the appearance of some illegal character, mostly at the beginning
of a token.
SYNTAX ANALYSIS
Syntax analysis or Parsing is the second phase of a compiler. We have seen that a
lexical analyzer can identify tokens with the help of regular expression and pattern rules. But
a lexical analyzer cannot check the syntax of a given sentence due to the limitations of the
regular expressions. Regular expressions cannot check balanced tokens, such as parentheses.
Therefore, this phase uses context-free grammar (CFG), which is recognized by push-down
automata. CFG is a superset of Regular Grammar:
It implies that every Regular Grammar is also context-free, but there exist some
problems, which are beyond the scope of Regular Grammar. CFG is a helpful tool in describing
the syntax of programming languages.
Context Free Grammar
A context-free grammar CFG (or just a grammar) G is a tuple G = (V, T, P, S) where:
▪ V is the (finite) set of variables (or non-terminals or syntactic categories). Each
variable represents a language, i.e., a set of strings,
▪ T is a finite set of terminals, i.e., the symbols that form the strings of the language
being defined
▪ P is a set of production rules that represent the recursive definition of the
language.
▪ S is the start symbol that represents the language being defined.
Other variables represent auxiliary classes of strings that are used to help define the language
of the start symbol.
Each production rule consists of:
▪ A variable that is being (partially) defined by the production. This variable is often
called the head of the production.
▪ The production symbol →.
▪ A string of zero or more terminals and variables. This string, called the body of the
production, represents one way to form strings in the language of the variable of the
head. In doing so, we leave terminals unchanged and substitute for each variable of
the body any string that is known to be in the language of that variable.
We often refer to the production whose head is A as “productions for A” or “A-productions”.
Moreover, the productions:
A → α1, A → α2 . . .A → αn, can be replaced by the notation
A → α1 | α2 | . . . | αn
The strings are derived from the start symbol by repeatedly replacing a non-terminal
(initially start symbol) by the right side of a production, for that non-terminal.
Example: G = ({E}, {+, *, (, ), id}, P, E), where the set of productions P is:
E → E + E | E * E | (E) | id
Syntax Analyzers
A syntax analyzer or parser takes the input from a lexical analyzer in the form of token
streams. The parser analyzes the source code (token stream) against the production rules to
detect any errors in the code. The output of this phase is a parse tree.
This way, the parser accomplishes two tasks, i.e., parsing the code, looking for errors and
generating a parse tree as the output of the phase.
Parsers are expected to parse the whole code even if some errors exist in the program.
Parsers use error recovery strategies.
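The following is a minimal recursive-descent parser sketch in C for the small non-left-recursive expression grammar E → T { + T }, T → F { * F }, F → id (this grammar and the single-letter id tokens are assumptions made for illustration, not the text's own example):

#include <stdio.h>
#include <stdlib.h>

static const char *input;                /* token stream: 'a'..'z' stand for id tokens */

static void error(void) { printf("syntax error at '%c'\n", *input); exit(1); }

static void F(void) {                    /* F -> id */
    if (*input >= 'a' && *input <= 'z') input++;
    else error();
}
static void T(void) {                    /* T -> F { * F } */
    F();
    while (*input == '*') { input++; F(); }
}
static void E(void) {                    /* E -> T { + T } */
    T();
    while (*input == '+') { input++; T(); }
}

int main(void) {
    input = "a+b*c";
    E();
    if (*input == '\0') printf("parse successful\n");
    else error();
    return 0;
}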
Derivation
A derivation is basically a sequence of production rules, in order to get the input string.
During parsing, we take two decisions for some sentential form of input:
▪ Deciding the non-terminal which is to be replaced.
▪ Deciding the production rule, by which, the non-terminal will be replaced.
To decide which non-terminal to replace with a production rule, we can have two
options.
Left-most Derivation
If the sentential form of an input is scanned and replaced from left to right, it is called left-
most derivation. The sentential form derived by the left-most derivation is called the left-
sentential form.
Right-most Derivation
If the sentential form of an input is scanned and replaced from right to left, it is known as
right-most derivation. The sentential form derived by the right-most derivation is called the right-
sentential form.
Example:
Production rules:
E → E + E
E → E * E
E → id
Input string: id + id * id
The left-most derivation is:
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
Notice that the left-most non-terminal is always processed first.
Parse Tree
A parse tree is a graphical depiction of a derivation. It is convenient to see how strings
are derived from the start symbol. The start symbol of the derivation becomes the root of the
parse tree. For example, taking the left-most derivation of a + b * c:
E → E + E → a + E → a + E * E → a + b * E → a + b * c
SEMANTIC ANALYSIS
Semantic analysis makes sure that the declarations and statements of a program are
semantically correct. It is a collection of procedures which are called by the parser as and
when required by the grammar. Both the syntax tree of the previous phase and the symbol
table are used to check the consistency of the given code. Type checking is an important
part of semantic analysis, where the compiler makes sure that each operator has matching
operands.
It uses syntax tree and symbol table to check whether the given program is
semantically consistent with language definition. It gathers type information and stores it in
either syntax tree or symbol table. This type information is subsequently used by compiler
during intermediate-code generation.
Semantic Errors:
Errors recognized by semantic analyzer are as follows:
• Type mismatch
• Undeclared variables
• Reserved identifier misuse
• Multiple declarations of a variable in a scope
• Accessing an out-of-scope variable
• Actual and formal parameter mismatch
Functions of Semantic Analysis:
• Type Checking – Ensures that data types are used in a way consistent with their
definition.
• Label Checking – A program should contain valid label references.
• Flow Control Check – Keeps a check that control structures are used in a proper
manner, (example: no break statement outside a loop).
Example:
float x = 10.1;
float y = x*30;
In the above example, the integer 30 will be type cast to the float value 30.0 before the
multiplication by the semantic analyzer.
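A minimal sketch of the kind of check involved, assuming a hypothetical check_mul routine and a simple two-type system (not any particular compiler's implementation):

#include <stdio.h>

typedef enum { TYPE_INT, TYPE_FLOAT, TYPE_ERROR } Type;

/* Result type of a '*' expression: if either operand is float, the int
   operand is implicitly converted (type cast) and the result is float. */
Type check_mul(Type left, Type right) {
    if (left == TYPE_ERROR || right == TYPE_ERROR) return TYPE_ERROR;
    if (left == TYPE_FLOAT || right == TYPE_FLOAT) return TYPE_FLOAT;
    return TYPE_INT;
}

int main(void) {
    /* x * 30, where x is float and 30 is an integer constant */
    Type result = check_mul(TYPE_FLOAT, TYPE_INT);
    printf("result type: %s\n", result == TYPE_FLOAT ? "float" : "int");
    return 0;
}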
Semantics
Semantics of a language provide meaning to its constructs, like tokens and syntax
structure. Semantics help interpret symbols, their types, and their relations with each other.
Semantic analysis judges whether the syntax structure constructed in the source program
derives any meaning or not.
CFG + semantic rules = Syntax Directed Definitions
For example, the statement int a = "value"; should not raise an error in the lexical and
syntax analysis phases, as it is lexically and structurally correct, but it should generate a
semantic error since the type of the assignment differs. These rules are set by the grammar
of the language and evaluated in semantic analysis.
Synthesized attributes
These attributes get their values from the attribute values of their child nodes. To illustrate,
consider the following production:
S → A B C
If S takes values from its child nodes (A, B, C), then it is said to be a synthesized
attribute, as the values of A, B and C are synthesized into S.
For example, in (E → E + T), the parent node E gets its value from its child nodes.
Synthesized attributes never take values from their parent nodes or any sibling nodes.
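A minimal sketch of how a synthesized attribute such as E.val (e.g., E.val = E1.val + T.val) can be computed by a post-order walk of an annotated tree; the Node structure and the synthesize function are illustrative assumptions:

#include <stdio.h>

typedef struct Node {
    char op;                    /* '+' or '*' for interior nodes, 0 for leaves */
    int  val;                   /* the synthesized attribute */
    struct Node *left, *right;
} Node;

/* Post-order walk: a node's attribute is computed only after the
   attributes of its children are known. */
int synthesize(Node *n) {
    if (n->op == 0) return n->val;                    /* leaf: a constant */
    int l = synthesize(n->left), r = synthesize(n->right);
    n->val = (n->op == '+') ? l + r : l * r;
    return n->val;
}

int main(void) {
    /* annotated tree for 2 + 3 * 4 */
    Node c2 = {0, 2, NULL, NULL}, c3 = {0, 3, NULL, NULL}, c4 = {0, 4, NULL, NULL};
    Node mul = {'*', 0, &c3, &c4};
    Node add = {'+', 0, &c2, &mul};
    printf("E.val = %d\n", synthesize(&add));         /* prints E.val = 14 */
    return 0;
}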
Inherited attributes
In contrast to synthesized attributes, inherited attributes can take values from the parent
and/or siblings. As in the following production,
S →ABC
A can get values from S, B and C. B can take values from S, A, and C. Likewise, C can
take values from S, A, and B.
Intermediate Code Generation
Intermediate code is the interface between the front end and the back end in a compiler.
Ideally, the details of the source language are confined to the front end and the details of the
target machine to the back end.
Source code can be directly translated into its target machine code, so why is it
necessary to translate the source code into an intermediate code which is then translated to
its target code?
• If a compiler translates the source language to its target machine language without
having the option for generating intermediate code, then for each new machine,
a full native compiler is required
• Intermediate code eliminates the need of a new full compiler for every unique
machine by keeping the analysis portion same for all the compilers
• ICG receives its input from the predecessor phase, the semantic analysis phase, in
the form of an annotated syntax tree
• The second part of compiler, synthesis, is changed according to the target machine
• It becomes easier to apply the source code modifications to improve code
performance by applying code optimization techniques on the intermediate code
While generating machine code directly from source code is possible, it entails problems:
• With m languages and n target machines, we need to write m front ends, m x n
optimizers, and m x n code generators,
• The code optimizer which is one of the largest and very difficult to-write components
of a compiler, cannot be reused
• By converting source code to an intermediate code, a machine-independent code
optimizer may be written,
• This means just m front ends, n code generators and 1 optimizer
Intermediate Representation
Intermediate codes can be represented in two ways and they have their own benefits.
The first is high-level IR, or high-level intermediate code representation, which is very close to
the source language itself. It can be easily generated from the source code, and code
modifications to enhance performance can easily be applied to it. But for target machine
optimization, it is less preferred.
The other is Low Level IR or Low Level Intermediate Code representation. It is close to the
target machine, which makes it suitable for register and memory allocation, instruction set
selection, etc. It is good for machine-dependent optimizations.
• Intermediate code can be either language specific (e.g., Byte Code for Java) or
language independent (three-address code).
• It must be easy to produce and easy to translate to machine code.
• It is a sort of universal assembly language and should not contain any machine-
specific parameters (registers, addresses, etc.)
• It is usually represented as three-address code, but the choice of intermediate code
implementation is up to the compiler designer
• Quadruples, triples, indirect triples are the classical forms used for machine-
independent optimizations and machine code generation
• Static Single Assignment form (SSA) is a recent form and enables more effective
optimizations
Forms of ICG
• Three address Code
• Abstract Syntax Tree
• Polish Notation
Three-Address Code
• Instructions are very simple: LHS is the target and the RHS has at most two sources
and one operator
• RHS sources can be either variables or constants
• Examples: a = b + c, x = -y, if a > b go to L1
• Three-address code is a generic form and can be implemented as quadruples,
triples, indirect triples
• Example: The three-address code for (a + b*c) - (d/(b*c)) is:
t1 = b * c
t2 = a + t1
t3 = b * c
t4 = d / t3
t5 = t2 - t4
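A minimal sketch of how such three-address code could be stored as quadruples in C (the Quad record and its field names are assumptions for illustration):

#include <stdio.h>

/* One quadruple: result := arg1 op arg2 */
typedef struct { char op; const char *arg1, *arg2, *result; } Quad;

int main(void) {
    Quad code[] = {                     /* three-address code from the example */
        { '*', "b",  "c",  "t1" },
        { '+', "a",  "t1", "t2" },
        { '*', "b",  "c",  "t3" },
        { '/', "d",  "t3", "t4" },
        { '-', "t2", "t4", "t5" },
    };
    for (int i = 0; i < 5; i++)
        printf("%s = %s %c %s\n", code[i].result,
               code[i].arg1, code[i].op, code[i].arg2);
    return 0;
}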
The intermediate code produced from a DAG (Directed Acyclic Graph) is more compact than
that produced from an AST; for example, in a = b * (-c) + b * (-c) the common sub-expression
b * (-c) is computed only once.
Indirect Triples
• These consist of a listing of pointers to triples; rather than a listing of the triples
themselves
• The triples consist of three fields: op, arg1, arg2
• The arg1 or arg2 could be pointers
CODE OPTIMIZATION
Efforts to improve the code can be made at various levels of the compiling process:
• At the beginning, users can change/rearrange the code or use better algorithms to
write the code.
• After generating intermediate code, the compiler can modify the intermediate
code by address calculations and improving loops.
• While producing the target machine code, the compiler can make use of memory
hierarchy and CPU registers.
Optimization can be categorized broadly into two types: machine independent and
machine dependent.
Machine-independent Optimization
In this optimization, the compiler takes in the intermediate code and transforms a part
of code that does not involve any CPU registers and/or absolute memory locations. For
example:
do {
    item = 10;
    value = value + item;
} while (value < 100);
This code involves repeated assignment of the identifier item inside the loop. Moving the
assignment out of the loop, as shown below,
item = 10;
do {
    value = value + item;
} while (value < 100);
not only saves CPU cycles, but the optimized code can also be used on any processor.
Machine-dependent Optimization
Machine-dependent optimization is done after the target code has been generated
and when the code is transformed according to the target machine architecture. It involves
CPU registers and may have absolute memory references rather than relative references.
Machine-dependent optimizers put efforts to take maximum advantage of memory
hierarchy.
Organization of the code optimizer:
The techniques used are a combination of Control-Flow and Data-Flow analysis:
• Control-Flow Analysis: Identifies loops in the flow graph of a program since such loops
are usually good candidates for improvement.
• Data-Flow Analysis: Collects information about the way variables are used in a
program.
Steps before optimization:
• Source program should be converted to Intermediate code
• Basic blocks construction
• Generating flow graph
• Apply optimization
Basic Blocks & Flow Graphs
Source codes generally have a number of instructions, which are always executed in
sequence and are considered as the basic blocks of the code. These basic blocks do not have
any jump statements among them, i.e., when the first instruction is executed, all the
instructions in the same basic block will be executed in their sequence of appearance without
losing the flow control of the program.
A program can have various constructs as basic blocks, like IF-THEN-ELSE, SWITCH-
CASE, conditional statements and loops such as DO-WHILE, FOR, and REPEAT-UNTIL, etc.
Characteristics of Basic block:
• They do not contain any kind of jump statements in them.
• There is no possibility of branching or halting in the middle.
• All the statements execute in the same order they appear.
• They do not lose the flow control of the program
Basic blocks play an important role in identifying variables, which are being used more
than once in a single basic block. If any variable is being used more than once, the register
memory allocated to that variable need not be emptied unless the block finishes execution.
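Basic blocks are commonly identified by first finding their "leaders". The sketch below, with a made-up Instr encoding (the structure and the mark_leaders helper are assumptions for illustration), applies the usual three leader rules to a list of three-address instructions:

#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool is_jump;    /* does this instruction transfer control? */
    int  target;     /* index of the jump target, -1 if none    */
} Instr;

/* Leader rules: (1) the first instruction is a leader; (2) any target of a
   jump is a leader; (3) any instruction following a jump is a leader. */
void mark_leaders(const Instr *code, int n, bool *leader) {
    for (int i = 0; i < n; i++) leader[i] = false;
    if (n > 0) leader[0] = true;                       /* rule 1 */
    for (int i = 0; i < n; i++) {
        if (code[i].is_jump) {
            if (code[i].target >= 0 && code[i].target < n)
                leader[code[i].target] = true;         /* rule 2 */
            if (i + 1 < n) leader[i + 1] = true;       /* rule 3 */
        }
    }
}

int main(void) {
    /* instruction 2 is "goto 0", so the leaders are instructions 0 and 3 */
    Instr code[5] = { {false,-1}, {false,-1}, {true,0}, {false,-1}, {false,-1} };
    bool leader[5];
    mark_leaders(code, 5, leader);
    for (int i = 0; i < 5; i++)
        printf("instruction %d: %s\n", i, leader[i] ? "leader" : "-");
    return 0;
}

Each leader, together with the instructions up to (but not including) the next leader, then forms one basic block.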
A number of transformations can be applied to a basic block without changing the set of
expressions it computes:
a). Common subexpression elimination: Consider the basic block:
• a = b + c
• b = a - d
• c = b + c
• d = a - d
The second and fourth statements compute the same expression (a - d), so the fourth
statement can simply reuse the value already computed into b, and the block becomes:
• a = b + c
• b = a - d
• c = b + c
• d = b
Note that the third statement c = b + c cannot be eliminated, because b is redefined by the
second statement.
b). Dead-code elimination: Suppose x is dead, that is, never subsequently used, at the point
where the statement x := y + z appears in a basic block. Then this statement may be safely
removed without changing the value of the basic block.
c). Renaming temporary variables: A statement t := b + c (t is a temporary) can be changed
to u := b + c (u is a new temporary) and all uses of this instance of t can be changed to u
without changing the value of the basic block. Such a block is called a normal-form block.
d) Interchange of statements: Suppose a block has the following two adjacent statements:
t1 := b + c
t2 := x + y
We can interchange the two statements without affecting the value of the block if and only if
neither x nor y is t1 and neither b nor c is t2.
e). Algebraic transformations:
Algebraic transformations can be used to change the set of expressions computed by
a basic block into an algebraically equivalent set.
Examples:
• x := x + 0 or x := x * 1 can be eliminated from a basic block without changing the set
of expressions it computes.
• The exponential statement x := y ** 2 can be replaced by x := y * y.
• x / 1 = x, x / 2 = x * 0.5
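A minimal sketch of how such algebraic simplifications might be applied to a single quadruple (the Quad layout and the simplify routine are illustrative assumptions):

#include <stdio.h>
#include <string.h>

/* One quadruple: result := arg1 op arg2 (op "=" means a plain copy) */
typedef struct { char op[4]; char arg1[8]; char arg2[8]; char result[8]; } Quad;

/* x + 0 -> x,  x * 1 -> x,  y ** 2 -> y * y */
void simplify(Quad *q) {
    if (strcmp(q->op, "+") == 0 && strcmp(q->arg2, "0") == 0)
        strcpy(q->op, "=");                   /* x + 0 becomes a copy of x */
    else if (strcmp(q->op, "*") == 0 && strcmp(q->arg2, "1") == 0)
        strcpy(q->op, "=");                   /* x * 1 becomes a copy of x */
    else if (strcmp(q->op, "**") == 0 && strcmp(q->arg2, "2") == 0) {
        strcpy(q->op, "*");                   /* y ** 2 becomes y * y */
        strcpy(q->arg2, q->arg1);
    }
}

int main(void) {
    Quad q = { "**", "y", "2", "x" };         /* x := y ** 2 */
    simplify(&q);
    printf("%s = %s %s %s\n", q.result, q.arg1, q.op, q.arg2);   /* x = y * y */
    return 0;
}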
Directed Acyclic Graph (DAG):
A DAG is a tool that depicts the structure of basic blocks, helps to see the flow of values
among the basic blocks, and offers optimization too. A DAG provides easy
transformations on basic blocks:
• Leaf nodes represent identifiers, names or constants.
• Interior nodes represent operators.
• Interior nodes also represent the results of expressions or the identifiers/name where
the values are to be stored or assigned.
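A minimal DAG-construction sketch in C: before creating an interior node (op, left, right), it looks for an existing node with the same fields and reuses it, so a repeated sub-expression such as b*c in (a + b*c) - (d/(b*c)) appears only once. The node layout and the leaf/interior helpers are assumptions for illustration:

#include <stdio.h>
#include <string.h>

typedef struct { char op; int left, right; char name[8]; } DagNode;

static DagNode nodes[64];
static int count = 0;

/* Return (creating if necessary) the leaf node for an identifier. */
int leaf(const char *name) {
    for (int i = 0; i < count; i++)
        if (nodes[i].op == 0 && strcmp(nodes[i].name, name) == 0) return i;
    nodes[count].op = 0;
    strcpy(nodes[count].name, name);
    return count++;
}

/* Return an interior node, reusing an identical existing node if present. */
int interior(char op, int l, int r) {
    for (int i = 0; i < count; i++)
        if (nodes[i].op == op && nodes[i].left == l && nodes[i].right == r)
            return i;                                  /* common sub-expression */
    nodes[count].op = op;
    nodes[count].left = l;
    nodes[count].right = r;
    return count++;
}

int main(void) {
    /* (a + b*c) - (d / (b*c)): the b*c node is built once and shared */
    int bc1 = interior('*', leaf("b"), leaf("c"));
    int add = interior('+', leaf("a"), bc1);
    int bc2 = interior('*', leaf("b"), leaf("c"));     /* reuses the first b*c */
    int div = interior('/', leaf("d"), bc2);
    interior('-', add, div);
    printf("nodes built: %d, b*c shared: %s\n", count, bc1 == bc2 ? "yes" : "no");
    return 0;
}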
CODE GENERATION
Code generation can be considered the final phase of compilation. Further optimization
can be applied after code generation, but that can be seen as part of the code generation
phase itself. It takes as input an intermediate representation of the
source program and produces as output an equivalent target program. The output code must
be correct and of high quality, meaning that it should make effective use of the resources of
the target machine. Moreover, the code generator itself should run efficiently.
While the details are dependent on the target language and the operating system,
issues such as memory management, instruction selection, register allocation, and evaluation
order are inherent in almost all code generation problems.
INPUT TO THE CODE GENERATOR
The input to the code generator consists of the intermediate representation of the
source program produced by the front end, together with information in the symbol table
that is used to determine the run time addresses of the data objects denoted by the names
in the intermediate representation.
It is assumed that prior to code generation the front end has scanned, parsed, and
translated the source program into a reasonably detailed intermediate representation, so the
values of names appearing in the intermediate language can be represented by quantities
that the target machine can directly manipulate (bits, integers, reals, pointers, etc.). It is also
assumed that the necessary type checking has taken place, so type conversion operators have
been inserted wherever necessary and obvious semantic errors (e.g., attempting to index an
array by a floating point number) have already been detected. The code generation phase can
therefore proceed on the assumption that its input is free of errors.
Target Program
The output of the code generator is the target program. The output may take on a
variety of forms: absolute machine language, relocatable machine language, or assembly
language. Producing an absolute machine language program as output has the advantage that
it can be placed in a location in memory and immediately executed. A small program can be
compiled and executed quickly.
Producing a relocatable machine language program as output allows subprograms to
be compiled separately. A set of relocatable object modules can be linked together and
loaded for execution by a linking loader. Although we must pay the added expense of linking
and loading if we produce relocatable object modules, we gain a great deal of flexibility in
being able to compile subroutines separately and to call other previously compiled programs
from an object module. If the target machine does not handle relocation automatically, the
compiler must provide explicit relocation information to the loader to link the separately
compiled program segments.
Instruction Selection
For example, the three-address statements a := b + c and d := a + e might be translated
into the code sequence:
MOV b, R0
ADD c, R0
MOV R0, a
MOV a, R0
ADD e, R0
MOV R0, d
Here the fourth statement is redundant, and so is the third if "a" is not subsequently
used. The quality of the generated code is determined by its speed and size.
Instruction speeds are needed to design good code sequence but unfortunately,
accurate timing information is often difficult to obtain. Deciding which machine code
sequence is best for a given three address construct may also require knowledge about the
context in which that construct appears.
Register Allocation
Instructions involving register operands are usually shorter and faster than those
involving operands in memory. Therefore, efficient utilization of register is particularly
important in generating good code. The use of registers is often subdivided into two
problems:
• During register allocation, we select the set of variables that will reside in registers
at a point in the program.
• During a subsequent register assignment phase, we pick the specific register that
a variable will reside in.
Finding an optimal assignment of registers to variables is difficult, even with single
register values. Mathematically, the problem is NP-complete. The problem is further
complicated because the hardware and/or the operating system of the target machine may
require that certain register usage conventions be observed.
Choice of Evaluation Order
The order in which computations are performed can affect the efficiency of the target
code. Some computation orders require fewer registers to hold intermediate results than
others. Picking a best order is another difficult, NP-complete problem. Initially, we shall avoid
the problem by generating code for the three-address statements in the order in which they
have been produced by the intermediate code generator.