Compiler Design is the process of creating software tools called compilers that translate human-readable code
written in high-level programming languages, such as C++ or Java, into machine-readable code understood by
computers, like assembly language or machine code. The goal of compiler design is to automate this
translation process, making it more efficient and accurate. Compilers analyze the structure and syntax of the
source code, perform various optimizations, and generate executable programs that can be run on computers.
We use compilers to convert human-readable code into machine-readable code so that computers can
understand and execute it. Compilers streamline the translation process, making it faster and more efficient.
They also enable programmers to write code in high-level languages, which are easier to understand and
maintain. Additionally, compilers optimize the generated code for improved performance and portability
across different computer systems.
The high-level language is converted into binary language in various phases. A compiler is a program that
converts high-level language to assembly language. Similarly, an assembler is a program that converts the
assembly language to machine-level language.
Let us first understand how a program, using the C compiler, is executed on a host machine.
User writes a program in C language (high-level language).
The C compiler compiles the program and translates it to assembly program (low-level language).
An assembler then translates the assembly program into machine code (object).
A linker tool is used to link all the parts of the program together for execution (executable machine
code).
A loader loads all of them into memory and then the program is executed.
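On a typical Unix system these stages can be observed separately with real toolchain commands (the file name prog.c is illustrative): gcc -S prog.c -o prog.s emits the assembly file, as prog.s -o prog.o assembles it into an object file, gcc prog.o -o prog links it into an executable, and running ./prog makes the loader bring it into memory and start execution.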
Before diving straight into the concepts of compilers, we should understand a few other tools that work closely
with compilers.
Preprocessor
A preprocessor, generally considered a part of the compiler, is a tool that produces input for compilers. It deals
with macro processing, augmentation, file inclusion, language extension, etc.
Interpreter
An interpreter, like a compiler, translates high-level language into low-level machine language. The difference
lies in the way they read the source code or input. A compiler reads the whole source code at once, creates
tokens, checks semantics, generates intermediate code, translates the whole program, and may involve many
passes. In contrast, an interpreter reads a statement from the input, converts it to an intermediate code,
executes it, then takes the next statement in sequence. If an error occurs, an interpreter stops execution and
reports it, whereas a compiler reads the whole program even if it encounters several errors.
Assembler
An assembler translates assembly language programs into machine code. The output of an assembler is called
an object file, which contains a combination of machine instructions as well as the data required to place these
instructions in memory.
Linker
A linker is a computer program that links and merges various object files together in order to make an
executable file. All these files might have been compiled by separate assemblers. The major task of a linker is
to search for and locate referenced modules/routines in a program and to determine the memory locations where
these codes will be loaded, so that the program instructions have absolute references.
Loader
A loader is a part of the operating system and is responsible for loading executable files into memory and
executing them. It calculates the size of a program (instructions and data) and creates memory space for it. It
initializes various registers to initiate execution.
Cross-compiler
A compiler that runs on platform (A) and is capable of generating executable code for platform (B) is called a
cross-compiler.
Source-to-source Compiler
A compiler that takes the source code of one programming language and translates it into the source code of
another programming language is called a source-to-source compiler.
A compiler can broadly be divided into two phases based on the way they compile.
Analysis Phase
Known as the front-end of the compiler, the analysis phase of the compiler reads the source program, divides
it into core parts, and then checks for lexical, grammar and syntax errors. The analysis phase generates an
intermediate representation of the source program and the symbol table, which are fed to the synthesis
phase as input.
Synthesis Phase
Known as the back-end of the compiler, the synthesis phase generates the target program with the help of
intermediate source code representation and symbol table.
A compiler can have many phases and passes.
Pass : A pass refers to the traversal of a compiler through the entire program.
Phase : A phase of a compiler is a distinguishable stage, which takes input from the previous stage,
processes and yields output that can be used as input for the next stage. A pass can have more than one
phase.
Compiler Design - Phases of Compiler
The compilation process is a sequence of various phases. Each phase takes input from its previous stage, has
its own representation of source program, and feeds its output to the next phase of the compiler. Let us
understand the phases of a compiler.
Lexical Analysis
The first phase of the compiler works as a text scanner. This phase scans the source code as a stream of characters
and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens as:
<token-name, attribute-value>
Syntax Analysis
The next phase is called syntax analysis or parsing. It takes the tokens produced by lexical analysis as input
and generates a parse tree (or syntax tree). In this phase, token arrangements are checked against the source
code grammar, i.e., the parser checks if the expression made by the tokens is syntactically correct.
Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the language. For example,
it checks that values are assigned between compatible data types and flags errors such as adding a string to
an integer. The semantic analyzer also keeps track of identifiers, their types and expressions, and whether
identifiers are declared before use. The semantic analyzer produces an annotated syntax tree as output.
Intermediate Code Generation
After semantic analysis, the compiler generates intermediate code of the source program for the target machine.
It represents a program for some abstract machine. It is in between the high-level language and the machine
language. This intermediate code should be generated in such a way that it is easy to translate into
the target machine code.
Code Optimization
The next phase performs code optimization on the intermediate code. Optimization can be thought of as
removing unnecessary code lines and rearranging the sequence of statements in order to speed up the program
execution without wasting resources (CPU, memory).
Code Generation
In this phase, the code generator takes the optimized representation of the intermediate code and maps it to the
target machine language. The code generator translates the intermediate code into a sequence of (generally)
relocatable machine code. This sequence of machine-code instructions performs the same task as the intermediate
code would.
Symbol Table
It is a data structure maintained throughout all the phases of a compiler. The names of all identifiers, along with
their types, are stored here. The symbol table makes it easier for the compiler to quickly search for an identifier
record and retrieve it. The symbol table is also used for scope management.
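As an illustration, here is a minimal Python sketch of a scoped symbol table; the class and method names are assumptions made for this example, not a prescribed design. It uses a stack of dictionaries: entering a scope pushes a table, leaving pops it, and lookup walks outward.

class SymbolTable:
    def __init__(self):
        self.scopes = [{}]                   # global scope

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def declare(self, name, type_):
        self.scopes[-1][name] = type_        # declare in the current scope

    def lookup(self, name):
        for scope in reversed(self.scopes):  # innermost scope wins
            if name in scope:
                return scope[name]
        return None

table = SymbolTable()
table.declare("value", "int")
table.enter_scope()
table.declare("value", "float")              # shadows the outer declaration
print(table.lookup("value"))                 # float
table.exit_scope()
print(table.lookup("value"))                 # int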
Compiler Design - Lexical Analysis
Lexical analysis is the first phase of a compiler. It takes the modified source code from language preprocessors,
written in the form of sentences. The lexical analyzer breaks this text into a series of tokens, removing any
whitespace and comments in the source code.
If the lexical analyzer finds a token invalid, it generates an error. The lexical analyzer works closely with the
syntax analyzer. It reads character streams from the source code, checks for legal tokens, and passes the data
to the syntax analyzer on demand.
Tokens
A lexeme is a sequence of characters in the source program that forms a token. There are predefined rules
for every lexeme to be identified as a valid token. These rules are defined by the grammar, by means of a
pattern. A pattern explains what can be a token, and these patterns are defined by means of regular
expressions.
In a programming language, keywords, constants, identifiers, strings, numbers, operators and punctuation
symbols can be considered tokens.
For example, in C language, the variable declaration line
int value = 100;
contains the tokens:
int (keyword), value (identifier), = (operator), 100 (constant) and ; (symbol).
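As an illustration, the following Python sketch tokenizes that declaration using regular expressions; the token classes and patterns are simplified assumptions for this example, not a full C lexer.

import re

TOKEN_SPEC = [
    ("keyword",    r"\bint\b"),      # keyword pattern must precede identifier
    ("constant",   r"\d+"),
    ("identifier", r"[A-Za-z_]\w*"),
    ("operator",   r"="),
    ("symbol",     r";"),
    ("skip",       r"\s+"),
]
PATTERN = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(code):
    for match in PATTERN.finditer(code):
        if match.lastgroup != "skip":        # drop whitespace
            yield (match.lastgroup, match.group())

print(list(tokenize("int value = 100;")))
# [('keyword', 'int'), ('identifier', 'value'), ('operator', '='),
#  ('constant', '100'), ('symbol', ';')]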
Specifications of Tokens
Let us understand how language theory defines the following terms:
Alphabets
Any finite set of symbols is an alphabet: {0,1} is the binary alphabet, {0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F} is the
hexadecimal alphabet, and {a-z, A-Z} is the alphabet of English letters.
Strings
Any finite sequence of symbols of an alphabet is called a string. The length of a string is the total number of
symbol occurrences in it, e.g., the length of the string tutorialspoint is 14 and is denoted by |tutorialspoint| = 14.
A string having no symbols, i.e., a string of zero length, is known as the empty string and is denoted by ε
(epsilon).
Special symbols
A typical high-level language contains the following symbols:
Arithmetic symbols : Addition (+), Subtraction (-), Modulo (%), Multiplication (*), Division (/)
Punctuation : Comma (,), Semicolon (;), Dot (.), Arrow (->)
Assignment : =
Preprocessor : #
Language
A language is a set of strings over some finite alphabet. Computer languages are sets of strings, and
mathematical set operations can be performed on them. Regular languages, which include all finite languages,
can be described by means of regular expressions.
Notations
If r and s are regular expressions denoting the languages L(r) and L(s), then
Union : (r)|(s) is a regular expression denoting L(r) ∪ L(s)
Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
Kleene closure : (r)* is a regular expression denoting (L(r))*
Parentheses : (r) is a regular expression denoting L(r)
The remaining problem for the lexical analyzer is how to recognize the strings that match the regular
expressions specifying the token patterns of a language. A well-accepted solution is to use finite automata for
recognition.
Compiler Design - Finite Automata
A finite automaton is a state machine that takes a string of symbols as input and changes its state accordingly.
A finite automaton is a recognizer for regular expressions. When an input string is fed into the
automaton, it changes its state for each symbol. If the input string is successfully processed and the automaton
reaches a final state, the string is accepted, i.e., the string just fed is said to be a valid token of the language in
hand.
The mathematical model of finite automata consists of:
Finite set of states (Q)
Finite set of input symbols (Σ)
One Start state (q0)
Set of final states (qf)
Transition function (δ)
The transition function (δ) maps a state and an input symbol to a state: δ : Q × Σ ➔ Q
Finite Automata Construction
Let L(r) be a regular language recognized by some finite automata (FA).
States : States of FA are represented by circles. State names are written inside circles.
Start state : The state from which the automaton starts is known as the start state. The start state has an
arrow pointed towards it.
Intermediate states : All intermediate states have at least two arrows; one pointing to and another
pointing out from them.
Final state : If the input string is successfully parsed, the automaton is expected to be in this state. A final
state is represented by double circles. It may have any number of arrows pointing to it and out from it.
Transition : The transition from one state to another state happens when a desired symbol in the input
is found. Upon transition, the automaton can either move to the next state or stay in the same state.
Movement from one state to another is shown as a directed arrow, where the arrow points to the
destination state. If the automaton stays in the same state, an arrow pointing from a state to itself is drawn.
Example : We assume the FA accepts any three-digit binary value ending in the digit 1.
FA = ({q0, q1, q2, qf}, {0, 1}, q0, {qf}, δ)
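The following Python sketch implements this FA as a transition table; the intermediate state names q1 and q2 are assumptions added for the three-digit example, since the source names only q0 and qf.

DELTA = {
    ("q0", "0"): "q1", ("q0", "1"): "q1",
    ("q1", "0"): "q2", ("q1", "1"): "q2",
    ("q2", "0"): "q3", ("q2", "1"): "qf",   # only a final 1 reaches qf
}
FINALS = {"qf"}

def accepts(string):
    state = "q0"
    for symbol in string:
        state = DELTA.get((state, symbol))   # a missing entry acts as a dead state
        if state is None:
            return False
    return state in FINALS

print(accepts("101"))    # True
print(accepts("100"))    # False: ends in 0
print(accepts("1011"))   # False: four digits fall off the table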
Compiler Design - Syntax Analysis
Syntax analysis or parsing is the second phase of a compiler. In this chapter, we shall learn the basic concepts
used in the construction of a parser.
We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules.
But a lexical analyzer cannot check the syntax of a given sentence due to the limitations of regular
expressions. Regular expressions cannot check balanced tokens, such as parentheses. Therefore, this phase
uses context-free grammar (CFG), which is recognized by push-down automata.
CFG, on the other hand, is a superset of regular grammar.
It implies that every regular grammar is also context-free, but there exist some problems that are beyond
the scope of regular grammar. CFG is a helpful tool in describing the syntax of programming languages.
Context-Free Grammar
In this section, we will first see the definition of context-free grammar and introduce terminologies used in
parsing technology.
A context-free grammar has four components:
A set of non-terminals (V). Non-terminals are syntactic variables that denote sets of strings. The non-
terminals define sets of strings that help define the language generated by the grammar.
A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from which strings
are formed.
A set of productions (P). The productions of a grammar specify the manner in which the terminals
and non-terminals can be combined to form strings. Each production consists of a non-
terminal called the left side of the production, an arrow, and a sequence of tokens and/or non-
terminals, called the right side of the production.
One of the non-terminals is designated as the start symbol (S); from where the production begins.
The strings are derived from the start symbol by repeatedly replacing a non-terminal (initially the start symbol)
by the right side of one of that non-terminal's productions.
Example
We take the problem of the palindrome language, which cannot be described by means of a regular expression.
That is, L = { w | w = w^R } (where w^R is the reverse of w) is not a regular language. But it can be described
by means of a CFG, as illustrated below:
G = ( V, Σ, P, S )
Where:
V = { Q, Z, N }
P = { Q → Z | Q → N | Q → 0 | Q → 1 | Q → ε | Z → 0Q0 | N → 1Q1 }
Σ = { 0, 1 }
S = { Q }
This grammar describes the palindrome language, generating strings such as: 1001, 11100111, 00100, 1010101, 11111, etc.
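As a quick check, the following Python sketch randomly derives strings from this grammar, including the single-symbol productions Q → 0 | 1 needed for odd-length palindromes, and verifies each one is a palindrome. The derive helper and its bounded expansion depth are assumptions made for this illustration.

import random

def derive(depth=3):
    # Expand Q: via Z (0Q0), via N (1Q1), to a single symbol, or to ε.
    if depth == 0:
        return random.choice(["", "0", "1"])
    choice = random.choice(["Z", "N", "0", "1", "eps"])
    if choice == "Z":
        return "0" + derive(depth - 1) + "0"
    if choice == "N":
        return "1" + derive(depth - 1) + "1"
    if choice == "eps":
        return ""
    return choice

for _ in range(5):
    s = derive()
    assert s == s[::-1]        # every derived string is a palindrome
    print(repr(s))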
This way, the parser accomplishes two tasks: parsing the code while looking for errors, and generating a parse
tree as the output of the phase.
Parsers are expected to parse the whole code even if some errors exist in the program. Parsers use error-recovery
strategies, which we will learn later in this chapter.
Derivation
A derivation is a sequence of production rule applications used to obtain the input string. During parsing, we
make two decisions for some sentential form of the input:
Deciding which non-terminal is to be replaced.
Deciding the production rule by which the non-terminal will be replaced.
To decide which non-terminal to replace, we have two options.
Left-most Derivation
If the sentential form of an input is scanned and replaced from left to right, it is called left-most derivation.
The sentential form derived by the left-most derivation is called the left-sentential form.
Right-most Derivation
If we scan and replace the input with production rules from right to left, it is known as right-most derivation.
The sentential form derived from the right-most derivation is called the right-sentential form.
Example
Production rules:
E→E+E
E→E*E
E → id
Input string: id + id * id
The left-most derivation is:
E ⇒ E * E
⇒ E + E * E
⇒ id + E * E
⇒ id + id * E
⇒ id + id * id
Notice that the left-most side non-terminal is always processed first.
The right-most derivation is:
E ⇒ E + E
⇒ E + E * E
⇒ E + E * id
⇒ E + id * id
⇒ id + id * id
Parse Tree
A parse tree is a graphical depiction of a derivation. It is convenient to see how strings are derived from the
start symbol. The start symbol of the derivation becomes the root of the parse tree. Let us see this by an
example from the last topic.
We take the left-most derivation of a + b * c (reading a, b and c as id).
The left-most derivation is:
E ⇒ E * E
⇒ E + E * E
⇒ id + E * E
⇒ id + id * E
⇒ id + id * id
The parse tree grows step by step with the derivation: step 1 creates the root E with children E, * and E; step 2
expands the left child to E + E; and steps 3 to 5 replace the remaining non-terminals with id, one per step, until
all leaves are terminals.
In a parse tree:
All leaf nodes are terminals.
All interior nodes are non-terminals.
An in-order traversal gives the original input string.
A parse tree depicts the associativity and precedence of operators. The deepest sub-tree is traversed first, therefore
the operator in that sub-tree gets precedence over the operators in the parent nodes.
Ambiguity
A grammar G is said to be ambiguous if it has more than one parse tree (equivalently, more than one left-most
or right-most derivation) for at least one string.
Example
E→E+E
E→E–E
E → id
For the string id + id – id, the above grammar generates two parse trees: one grouping the expression as
(id + id) – id and another as id + (id – id).
Associativity
If an operand has operators on both sides, the side on which the operator takes this operand is decided by the
associativity of those operators. If the operation is left-associative, the operand will be taken by the left
operator; if the operation is right-associative, the right operator will take the operand.
Example
Operations such as Addition, Multiplication, Subtraction, and Division are left associative. If the expression
contains:
id op id op id
it will be evaluated as:
(id op id) op id
For example, (id + id) + id
Operations like Exponentiation are right associative, i.e., the order of evaluation in the same expression will
be:
id op (id op id)
For example, id ^ (id ^ id)
Precedence
If two different operators share a common operand, the precedence of operators decides which will take the
operand. That is, 2+3*4 can have two different parse trees, one corresponding to (2+3)*4 and another
corresponding to 2+(3*4). By setting precedence among operators, this problem can be easily removed. As in
the previous example, mathematically * (multiplication) has precedence over + (addition), so the expression
2+3*4 will always be interpreted as:
2 + (3 * 4)
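Python's own expression grammar encodes this precedence, which gives a quick way to see the chosen grouping; a small sketch (output shown for recent Python versions):

import ast

tree = ast.parse("2 + 3 * 4", mode="eval")
print(ast.dump(tree.body))
# BinOp(left=Constant(value=2), op=Add(),
#       right=BinOp(left=Constant(value=3), op=Mult(), right=Constant(value=4)))
# The * operation sits in the deeper subtree, i.e., 2 + (3 * 4).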
These methods decrease the chances of ambiguity in a language or its grammar.
Left Recursion
A grammar becomes left-recursive if it has any non-terminal ‘A’ whose derivation contains ‘A’ itself as the
left-most symbol. Left-recursive grammar is considered to be a problematic situation for top-down parsers.
Top-down parsers start parsing from the start symbol, which itself is a non-terminal. So, when the parser
encounters the same non-terminal in its derivation, it becomes hard for it to judge when to stop parsing the left
non-terminal, and it goes into an infinite loop.
Example:
(1) A => Aα | β
(2) S => Aα | β
A => Sd
(1) is an example of immediate left recursion, where A is any non-terminal symbol and α represents a string of
terminals and non-terminals.
(2) is an example of indirect-left recursion.
A top-down parser will first try to parse A, which in turn yields a string beginning with A itself, so the parser
may go into a loop forever.
Removal of Left Recursion
One way to remove left recursion is to use the following technique:
The production
A => Aα | β
is converted into following productions
A => βA'
A'=> αA' | ε
This does not impact the strings derived from the grammar, but it removes immediate left recursion.
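A minimal Python sketch of this transformation, assuming a grammar is represented as a mapping from a non-terminal to a list of alternatives (each alternative a list of symbols); the function name and representation are assumptions for this example.

def remove_immediate_left_recursion(head, alternatives):
    # A ::= Aα | β  becomes  A ::= βA',  A' ::= αA' | ε
    recursive = [alt[1:] for alt in alternatives if alt and alt[0] == head]
    others    = [alt for alt in alternatives if not alt or alt[0] != head]
    if not recursive:
        return {head: alternatives}          # nothing to do
    new_head = head + "'"
    return {
        head:     [beta + [new_head] for beta in others],
        new_head: [alpha + [new_head] for alpha in recursive] + [["ε"]],
    }

# E -> E + T | T   becomes   E -> T E',  E' -> + T E' | ε
print(remove_immediate_left_recursion("E", [["E", "+", "T"], ["T"]]))
# {'E': [['T', "E'"]], "E'": [['+', 'T', "E'"], ['ε']]}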
The second method is to use an algorithm that eliminates all direct and indirect left recursion by ordering the
non-terminals and rewriting each production whose right side begins with an earlier non-terminal.
Left Factoring
If more than one production of a non-terminal shares a common prefix string, a top-down parser cannot decide
which production to use. A grammar of the form
A => αβ | α𝜸 | …
can be left-factored into
A => αA'
A'=> β | 𝜸 | …
Now the parser has only one production per prefix, which makes it easier to take decisions.
First and Follow Sets
An important part of parser-table construction is to create the FIRST and FOLLOW sets. These sets tell which
terminals can appear at particular positions in a derivation, and they are used to fill the parsing table: an entry
T[A, t] = α records the decision to replace non-terminal A with production α when the lookahead terminal is t.
First Set
This set is created to know what terminal symbol is derived in the first position by a non-terminal. For
example,
α → t β
That is, α derives t (a terminal) in the very first position, so t ∈ FIRST(α).
Algorithm for calculating First set
FIRST(α) can be computed by the following standard rules:
If α is a terminal a, then FIRST(α) = { a }.
If α is a non-terminal and α → ε is a production, then ε ∈ FIRST(α).
If α is a non-terminal and α → β1 β2 ... βn is a production, add FIRST(β1) minus ε to FIRST(α); if β1 can
derive ε, continue with β2, and so on. If every βi can derive ε, add ε to FIRST(α).
Follow Set
FOLLOW(A) is the set of terminals that can appear immediately to the right of the non-terminal A in some
sentential form. It is computed from the productions in which A appears on the right side: $ is in FOLLOW(S)
for the start symbol S; for a production B → αAβ, everything in FIRST(β) except ε is in FOLLOW(A); and if β
can derive ε (or is empty), everything in FOLLOW(B) is also in FOLLOW(A).
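A minimal Python sketch of the FIRST computation, iterating to a fixed point over the same grammar representation as in the earlier sketch; the example grammar (a left-factored expression grammar) and the use of the string "ε" for the empty string are assumptions for this illustration.

GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], ["ε"]],
    "T":  [["id"]],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for head, alternatives in grammar.items():
            for alt in alternatives:
                for symbol in alt:
                    if symbol not in grammar:            # terminal (or ε)
                        added = {symbol} - first[head]
                        first[head] |= added
                        changed = changed or bool(added)
                        break
                    added = (first[symbol] - {"ε"}) - first[head]
                    first[head] |= added
                    changed = changed or bool(added)
                    if "ε" not in first[symbol]:
                        break
                else:                                     # every symbol derived ε
                    if "ε" not in first[head]:
                        first[head].add("ε")
                        changed = True
    return first

print(first_sets(GRAMMAR))
# {'E': {'id'}, "E'": {'+', 'ε'}, 'T': {'id'}}  (set order may vary)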
Top-down Parsing
When the parser starts constructing the parse tree from the start symbol and then tries to transform the start
symbol to the input, it is called top-down parsing.
Recursive descent parsing: It is a common form of top-down parsing. It is called recursive as it uses
recursive procedures to process the input. Recursive descent parsing suffers from backtracking.
Backtracking: It means, if one derivation of a production fails, the syntax analyzer restarts the
process using different rules of same production. This technique may process the input string more
than once to determine the right production.
Bottom-up Parsing
As the name suggests, bottom-up parsing starts with the input symbols and tries to construct the parse tree up
to the start symbol.
Example:
Input string : a + b * c
Production rules:
S→E
E→E+T
E→E*T
E→T
T → id
Let us start bottom-up parsing:
a+b*c
Read the input and check if any production matches with the input:
a+b*c
T+b*c
E+b*c
E+T*c
E*c
E*T
E
S
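The trace above can be reproduced with a small Python sketch. Note the reduce cases are checked in a hand-picked order that happens to work for this grammar, so this is a toy illustration rather than a general LR algorithm; a, b and c are read as id.

def shift_reduce(tokens):
    stack = []
    tokens = tokens + ["$"]
    i = 0
    while True:
        if stack[-1:] == ["id"]:
            stack[-1:] = ["T"]                        # T -> id
        elif stack[-3:] in (["E", "+", "T"], ["E", "*", "T"]):
            stack[-3:] = ["E"]                        # E -> E + T | E * T
        elif stack == ["T"]:
            stack[:] = ["E"]                          # E -> T (only at the start)
        elif stack == ["E"] and tokens[i] == "$":
            stack[:] = ["S"]                          # S -> E, accept
            print(" ".join(stack))
            return
        elif tokens[i] != "$":
            stack.append(tokens[i])                   # shift the next input symbol
            i += 1
        else:
            raise SyntaxError(" ".join(stack))
        print(" ".join(stack))

shift_reduce(["id", "+", "id", "*", "id"])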
Predictive Parser
Predictive parser is a recursive descent parser, which has the capability to predict which production is to be
used to replace the input string. The predictive parser does not suffer from backtracking.
To accomplish its tasks, the predictive parser uses a look-ahead pointer, which points to the next input
symbols. To make the parser back-tracking free, the predictive parser puts some constraints on the grammar
and accepts only a class of grammar known as LL(k) grammar.
Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree. Both the stack
and the input contain an end symbol $ to denote that the stack is empty and the input is consumed. The parser
refers to the parsing table to take a decision on the input and stack element combination.
In recursive descent parsing, the parser may have more than one production to choose from for a single
instance of input, whereas in a predictive parser, each step has at most one production to choose. There might be
instances where no production matches the input string, causing the parsing procedure to fail.
LL Parser
An LL parser accepts LL grammar. LL grammar is a subset of context-free grammar, with some
restrictions that yield a simplified version, in order to achieve easy implementation. LL grammar can be
implemented by means of either algorithm: recursive descent or table-driven.
An LL parser is denoted LL(k). The first L in LL(k) stands for scanning the input from left to right, the second L
stands for left-most derivation, and k represents the number of lookaheads. Generally k = 1, so LL(k) is often
written as LL(1).
LL Parsing Algorithm
We stick to deterministic LL(1) when explaining the parser, as the size of the table grows exponentially with
the value of k. Moreover, if a given grammar is not LL(1), then it is usually not LL(k) for any given k.
Given below is an algorithm for LL(1) Parsing:
Input:
string ω
parsing table M for grammar G
Output:
If ω is in L(G), a left-most derivation of ω; error otherwise.
Initially, the stack holds $ and the start symbol S (S on top), and ip points to the first symbol of ω$.
repeat
let X be the top stack symbol and a the symbol pointed to by ip.
if X ∈ Vt or X = $
if X = a
POP X and advance ip.
else
error()
endif
else /* X is a non-terminal */
if M[X, a] = X → Y1 Y2 ... Yk
POP X
PUSH Yk, Yk-1, ..., Y1 /* Y1 on top */
Output the production X → Y1 Y2 ... Yk
else
error()
endif
endif
until X = $ /* empty stack */
A grammar G is LL(1) if, for any two distinct productions A → α | β of G:
for no terminal a do both α and β derive strings beginning with a.
at most one of α and β can derive the empty string.
if β ⇒* ε, then α does not derive any string beginning with a terminal in FOLLOW(A).
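A minimal table-driven LL(1) sketch in Python, using the small left-factored expression grammar from earlier; the table entries below were filled in by hand from its FIRST/FOLLOW sets, and the grammar and table are assumptions made for this example.

TABLE = {
    ("E",  "id"): ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"],
    ("E'", "$"):  [],                      # E' -> ε
    ("T",  "id"): ["id"],
}

def ll1_parse(tokens):
    stack = ["$", "E"]                     # start symbol on top
    tokens = tokens + ["$"]
    i = 0
    while stack:
        top = stack.pop()
        lookahead = tokens[i]
        if top == lookahead:               # terminal (or $): match and advance
            i += 1
        elif (top, lookahead) in TABLE:
            body = TABLE[(top, lookahead)]
            print(f"{top} -> {' '.join(body) or 'ε'}")   # left-most derivation step
            stack.extend(reversed(body))   # leftmost symbol ends up on top
        else:
            raise SyntaxError(f"unexpected {lookahead!r} while expanding {top}")
    print("accepted")

ll1_parse(["id", "+", "id"])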
Shift-Reduce Parsing
Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps are known as shift-step and
reduce-step.
Shift step: The shift step refers to the advancement of the input pointer to the next input symbol,
which is called the shifted symbol. This symbol is pushed onto the stack. The shifted symbol is treated
as a single node of the parse tree.
Reduce step : When the parser finds a complete right-hand side (RHS) of a grammar rule on top of the stack
and replaces it with the left-hand side (LHS), it is known as the reduce step. This occurs when the top of the
stack contains a handle. To reduce, a POP operation is performed on the stack, popping off the handle and
replacing it with the LHS non-terminal symbol.
LR Parser
The LR parser is a non-recursive, shift-reduce, bottom-up parser. It handles a wide class of context-free grammars,
which makes it one of the most efficient syntax-analysis techniques. LR parsers are also known as LR(k) parsers,
where L stands for left-to-right scanning of the input stream, R stands for the construction of a right-most
derivation in reverse, and k denotes the number of lookahead symbols used to make decisions.
There are three widely used algorithms available for constructing an LR parser:
SLR(1) – Simple LR Parser:
o Works on the smallest class of grammars
o Few states, hence a very small table
o Simple and fast construction
LR(1) – LR Parser:
o Works on the complete set of LR(1) grammars
o Generates a large table and a large number of states
o Slow construction
LALR(1) – Look-Ahead LR Parser:
o Works on an intermediate size of grammar
o Number of states is the same as in SLR(1)
LR Parsing Algorithm
Here we describe a skeleton algorithm of an LR parser:
token = next_token()
repeat forever
s = top of stack
if action[s, token] = "shift si" then
PUSH token
PUSH si
token = next_token()
else if action[s, token] = "reduce A → β" then
POP 2 * |β| symbols
s = top of stack
PUSH A
PUSH goto[s, A]
else if action[s, token] = "accept" then
return
else
error()
LL vs. LR
LL : Starts with the root nonterminal on the stack.
LR : Ends with the root nonterminal on the stack.
LL : Uses the stack for designating what is still to be expected.
LR : Uses the stack for designating what is already seen.
LL : Builds the parse tree top-down.
LR : Builds the parse tree bottom-up.
LL : Continuously pops a nonterminal off the stack, and pushes the corresponding right-hand side.
LR : Tries to recognize a right-hand side on the stack, pops it, and pushes the corresponding nonterminal.
LL : Reads the terminals when it pops one off the stack.
LR : Reads the terminals while it pushes them on the stack.
LL : Performs a pre-order traversal of the parse tree.
LR : Performs a post-order traversal of the parse tree.
Abstract Syntax Trees
If we watch a parse tree closely, we find that most of the leaf nodes are the single child of their parent nodes.
This information can be eliminated before feeding the tree to the next phase. By hiding this extra information,
we obtain an abstract syntax tree (AST).
ASTs are important data structures in a compiler, carrying the least amount of unnecessary information. ASTs
are more compact than a parse tree and can be easily used by a compiler.
Compiler Design - Semantic Analysis
We have learnt how a parser constructs parse trees in the syntax analysis phase. The plain parse tree constructed
in that phase is generally of no use for a compiler, as it does not carry any information about how to evaluate the
tree. The productions of context-free grammar, which make up the rules of the language, do not specify
how to interpret them.
For example
E→E+T
The above CFG production has no semantic rule associated with it, and it cannot help in making any sense of
the production.
Semantics
Semantics of a language provide meaning to its constructs, like tokens and syntax structure. Semantics help
interpret symbols, their types, and their relations with each other. Semantic analysis judges whether the syntax
structure constructed in the source program derives any meaning or not.
CFG + semantic rules = Syntax Directed Definitions
For example:
int a = “value”;
should not issue an error in the lexical and syntax analysis phases, as it is lexically and structurally correct, but it
should generate a semantic error as the types of the assignment differ. These rules are set by the grammar of
the language and evaluated in semantic analysis. The following tasks should be performed in semantic
analysis:
Scope resolution
Type checking
Array-bound checking
Semantic Errors
We have mentioned some of the semantic errors that the semantic analyzer is expected to recognize:
Type mismatch
Undeclared variable
Reserved identifier misuse
Multiple declaration of a variable in a scope
Accessing an out-of-scope variable
Actual and formal parameter mismatch
Attribute Grammar
Attribute grammar is a special form of context-free grammar where some additional information (attributes)
is appended to one or more of its non-terminals in order to provide context-sensitive information. Each
attribute has a well-defined domain of values, such as integer, float, character, string, or expressions.
Attribute grammar is a medium to provide semantics to the context-free grammar and it can help specify the
syntax and semantics of a programming language. Attribute grammar (when viewed as a parse-tree) can pass
values or information among the nodes of a tree.
Example:
E → E + T { E.value = E.value + T.value }
The part in braces contains the semantic rule that specifies how the grammar should be interpreted.
Here, the values of the non-terminals E and T on the right are added together and the result is copied to the
non-terminal E on the left.
Semantic attributes are assigned values from their domains at the time of parsing and are evaluated during
assignments or in conditions. Based on the way attributes get their values, they can be broadly
divided into two categories: synthesized attributes and inherited attributes.
Synthesized attributes
These attributes get values from the attribute values of their child nodes. To illustrate, assume the following
production:
S → ABC
If S takes values from its child nodes (A, B, C), then it is said to have a synthesized attribute, as the values of
A, B and C are synthesized into S.
As in our previous example (E → E + T), the parent node E gets its value from its child node. Synthesized
attributes never take values from their parent nodes or any sibling nodes.
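A minimal Python sketch of synthesized-attribute evaluation: each node's value is computed bottom-up from its children, mirroring E.value = E.value + T.value. The tuple-based tree shape is an assumption made for this example.

def evaluate(node):
    # Leaves already carry their value attribute.
    if isinstance(node, int):
        return node
    op, left, right = node
    # Children are evaluated first (post-order); the parent's value attribute
    # is then synthesized from theirs.
    lval, rval = evaluate(left), evaluate(right)
    return lval + rval if op == "+" else lval * rval

tree = ("+", 2, ("*", 3, 4))   # 2 + 3 * 4, with * in the deeper subtree
print(evaluate(tree))          # 14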
Inherited attributes
In contrast to synthesized attributes, inherited attributes can take values from parent and/or siblings. As in the
following production,
S → ABC
A can get values from S, B and C. B can take values from S, A, and C. Likewise, C can take values from S, A,
and B.
Expansion : When a non-terminal is expanded to terminals as per a grammatical rule.
Reduction : When a terminal is reduced to its corresponding non-terminal according to grammar rules. Syntax
trees are parsed top-down and left to right. Whenever reduction occurs, we apply its corresponding semantic
rules (actions).
Semantic analysis uses syntax-directed translations to perform the above tasks.
The semantic analyzer receives the AST (Abstract Syntax Tree) from its previous stage (syntax analysis).
The semantic analyzer attaches attribute information to the AST, which is then called an attributed AST.
Attributes are two-tuple values: <attribute name, attribute value>
For example:
int value = 5;
<type, “integer”>
<presentvalue, “5”>