
NAME: GAPKWI S. REUEL
REG NO: U21DLCS10193
COURSE: COSC 408

Q1. a. What is Analytic Grammar?

Analytic grammar is a type of grammar that describes the structure of a language through analysis rather than synthesis: it focuses on parsing and recognizing existing sentences rather than generating new ones. In computational linguistics, analytic grammars underpin parsing algorithms that break sentences down into their syntactic components, in contrast to a generative grammar, whose production rules describe how sentences are built up.

1b. Compiler Construction for Multiple Languages and Architectures

(i) Naïve (Non-Portable) Approach

If a separate compiler is built for each (language, architecture) pair, we will need:

n × m

compilers, where:

 n is the number of programming languages.
 m is the number of target architectures.

For example, if there are 5 programming languages and 4 architectures, we need:

5 × 4 = 20

compilers, which is inefficient.

(ii) Portable Approach

If compilers are portable, compilation can be separated into two independent phases:

1. Front-end (Language-Specific): Each programming language is compiled into a common intermediate representation (IR) instead of directly generating machine code.
2. Back-end (Architecture-Specific): The IR is then translated into machine code for different architectures.

Thus, instead of needing n × m compilers, we only need:

n + m

where:

 n front-end compilers convert each language to IR.
 m back-end compilers convert the IR to machine code for each architecture.

For example, with 5 languages and 4 architectures, we need:

5 + 4 = 9

compilers instead of 20, greatly reducing complexity.

How is This Achieved?

This approach is commonly implemented using intermediate representations (IRs) such as:

 LLVM IR (used by Clang, Rust, Swift, etc.)
 Java Bytecode (used by JVM-based languages)
 WebAssembly (WASM) (used to compile languages for web browsers)

By compiling all languages to a unified IR, and then translating the IR to machine code for
different architectures, we avoid writing separate compilers for every combination. This
makes compilers easier to develop, maintain, and extend.
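To make the n + m idea concrete, here is a minimal Python sketch: each front end emits the same intermediate representation and each back end consumes it, so any front end can be paired with any back end. The language names, the toy IR format, and all function names here are hypothetical illustrations, not any real compiler's API.

```python
# Sketch of the n + m organisation: every front end emits a shared IR,
# every back end consumes that IR. All names and the IR format are hypothetical.

def c_frontend(source: str) -> list[str]:
    """Translate a toy fragment of C into the shared IR."""
    # A real front end would parse and type-check; here we just tag each line.
    return [f"IR: {line}" for line in source.splitlines()]

def pascal_frontend(source: str) -> list[str]:
    """Translate a toy fragment of Pascal into the same shared IR."""
    return [f"IR: {line}" for line in source.splitlines()]

def x86_backend(ir: list[str]) -> str:
    """Lower the shared IR to pretend x86 assembly."""
    return "\n".join(f"x86 <- {instr}" for instr in ir)

def arm_backend(ir: list[str]) -> str:
    """Lower the shared IR to pretend ARM assembly."""
    return "\n".join(f"arm <- {instr}" for instr in ir)

# Any front end can be combined with any back end:
# 2 front ends + 2 back ends give 2 x 2 = 4 compilers.
ir = c_frontend("a = b + c")
print(x86_backend(ir))
print(arm_backend(ir))
```

Adding a new language only requires a new front end, and adding a new architecture only requires a new back end, which is exactly why the cost grows as n + m instead of n × m.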

Q2. 2a. Using Transition Diagrams to Handle Keywords

A transition diagram is a finite state machine (FSM) representation used for lexical analysis.
It can be used to recognize keywords in two ways:

(i) Direct Transition Approach

 Each keyword is explicitly represented by a sequence of states.
 The transition follows the exact characters of the keyword.
 If a final state is reached, the keyword is recognized.
Example: Recognizing "if"
(Start) → 'i' → 'f' → (Final State: Recognized)
Limitation: Requires separate diagrams for each keyword, leading to redundancy.

(ii) Identifier Check with Keyword Table

 The transition diagram processes general identifiers (e.g., if, else, while, sum, etc.).
 After recognizing a sequence of letters, it checks a reserved keyword table to
determine if it’s a keyword.
Example: Recognizing "if" in if (x > 0)

(Start) → 'i' → 'f' → (Lookup Table) → (Keyword Recognized)


Advantage: A single transition diagram can handle both identifiers and keywords efficiently.
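A short Python sketch of approach (ii), assuming a toy scanner that reads a maximal run of letters and then consults a reserved-word table; the keyword set, token names, and function name are illustrative assumptions, not taken from any particular lexer.

```python
# Approach (ii): one identifier diagram plus a reserved-word table.
KEYWORDS = {"if", "else", "while", "return"}   # illustrative keyword table

def next_token(text: str, pos: int) -> tuple[str, str, int]:
    """Scan one run of letters starting at pos and classify it."""
    start = pos
    while pos < len(text) and text[pos].isalpha():   # stay in the "in identifier" state
        pos += 1
    lexeme = text[start:pos]
    kind = "KEYWORD" if lexeme in KEYWORDS else "IDENTIFIER"
    return kind, lexeme, pos

print(next_token("if (x > 0)", 0))   # ('KEYWORD', 'if', 2)
print(next_token("ifx = 1", 0))      # ('IDENTIFIER', 'ifx', 3)
```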

2b. Leftmost and Rightmost Derivation for (id * id) + id


Given the CFG:
E→E+T | T

T→T∗F | F

F→ (E) | id

Leftmost Derivation (Expanding the leftmost non-terminal first)


E ⇒ E + T
⇒ T + T
⇒ F + T
⇒ (E) + T
⇒ (T) + T
⇒ (T ∗ F) + T
⇒ (F ∗ F) + T
⇒ (id ∗ F) + T
⇒ (id ∗ id) + T
⇒ (id ∗ id) + F
⇒ (id ∗ id) + id

Result: (id * id) + id

Rightmost Derivation (Expanding the rightmost non-terminal first)


E ⇒ E + T
⇒ E + F
⇒ E + id
⇒ T + id
⇒ F + id
⇒ (E) + id
⇒ (T) + id
⇒ (T ∗ F) + id
⇒ (T ∗ id) + id
⇒ (F ∗ id) + id
⇒ (id ∗ id) + id

Result: (id * id) + id

2c. Removing Left Recursion from the Given Grammar


Given grammar:
Z→ Xa | b
X→ Xc | Zd | ϵ

Step 1: Identify the Left Recursion

X → Xc is immediately left-recursive. There is also indirect left recursion, because X → Zd and Z → Xa allow X ⇒ Zd ⇒ Xad.

Step 2: Remove the Indirect Left Recursion

Substitute the productions of Z into X → Zd:
X → Xc | Xad | bd | ϵ

Step 3: Remove the Immediate Left Recursion

Introduce a new non-terminal X′:
X → bdX′ | X′
X′ → cX′ | adX′ | ϵ

Final Grammar (Without Left Recursion)

Z → Xa | b
X → bdX′ | X′
X′ → cX′ | adX′ | ϵ

No production is now left-recursive, either directly or indirectly, making the grammar suitable for top-down parsing.
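For reference, a small Python sketch of the general elimination algorithm (substitute the productions of earlier non-terminals, then remove immediate left recursion). The dict-of-symbol-lists grammar representation and the helper names are assumptions made purely for illustration.

```python
# Grammar format: {nonterminal: [alternatives]}, each alternative a list of symbols;
# the empty list [] stands for epsilon. Representation and names are illustrative.

def remove_immediate(nt, alts):
    """Split A -> A a | b into A -> b A', A' -> a A' | eps."""
    recursive = [alt[1:] for alt in alts if alt and alt[0] == nt]
    others    = [alt for alt in alts if not alt or alt[0] != nt]
    if not recursive:
        return {nt: alts}
    new = nt + "'"
    return {nt:  [alt + [new] for alt in others],
            new: [alt + [new] for alt in recursive] + [[]]}

def substitute(alts, b, b_alts):
    """Replace each alternative that begins with b by b's alternatives followed by the rest."""
    out = []
    for alt in alts:
        if alt and alt[0] == b:
            out.extend(sub + alt[1:] for sub in b_alts)
        else:
            out.append(alt)
    return out

def eliminate_left_recursion(grammar, order):
    """Process non-terminals in order: substitute earlier ones, then remove immediate recursion."""
    result = {}
    for i, a in enumerate(order):
        alts = grammar[a]
        for b in order[:i]:
            alts = substitute(alts, b, result[b])
        result.update(remove_immediate(a, alts))
    return result

# The grammar from this question: Z -> Xa | b, X -> Xc | Zd | eps
g = {"Z": [["X", "a"], ["b"]],
     "X": [["X", "c"], ["Z", "d"], []]}
print(eliminate_left_recursion(g, ["Z", "X"]))
# Z stays Z -> Xa | b;  X -> bdX' | X';  X' -> cX' | adX' | eps
```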

Q3. 3a. Closure of a Language Class under an Operation

When we say that a class of languages is closed under a particular operation, it means that applying that operation to languages within the class yields a language that also belongs to the same class. For example, the class of regular languages is closed under union, concatenation, and Kleene star: applying any of these operations to regular languages always produces another regular language.

3b. Backpatching

Backpatching is a technique used in syntax-directed translation (SDT) and code generation to handle forward jumps (e.g., in loops and conditionals): jump instructions are first emitted with their target addresses left blank, and the blanks are filled in ("patched") once the actual targets become known.
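A minimal Python sketch of how backpatching can be organised, loosely following the textbook makelist / merge / backpatch helpers; the instruction format and function names here are illustrative assumptions, not a specific compiler's interface.

```python
# Minimal backpatching sketch: jumps are emitted with an unknown target (None)
# and the target slot is filled in later, once the destination is known.

code = []   # the generated instructions, in order

def emit(op, target=None):
    """Append an instruction and return its index."""
    code.append({"op": op, "target": target})
    return len(code) - 1

def makelist(i):
    """A list containing one instruction index whose target is still unfilled."""
    return [i]

def merge(l1, l2):
    """Combine two lists of unfilled jumps."""
    return l1 + l2

def backpatch(jump_list, label):
    """Fill in the target of every jump on the list."""
    for i in jump_list:
        code[i]["target"] = label

# Example: two forward jumps that should end up at the same join point.
j1 = emit("goto")                        # e.g. jump at the end of a then-branch
j2 = emit("goto")                        # e.g. jump at the end of an else-branch
pending = merge(makelist(j1), makelist(j2))
backpatch(pending, 100)                  # the join point turns out to be instruction 100
for i, instr in enumerate(code):
    print(i, instr["op"], instr["target"])
```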
3c. Use of Symbols in Lex Regular Expressions

SYMBOL | USAGE IN LEX REGULAR EXPRESSIONS | EXAMPLE
\  | Escape character to treat special characters as literals. | \. matches a literal dot.
.  | Matches any single character except a newline. | a.b matches acb, adb, etc.
^  | Matches the beginning of a line. | ^abc matches abc only if it appears at the start of a line.
$  | Matches the end of a line. | xyz$ matches xyz only if it appears at the end of a line.
\n | Matches a newline character. | "Hello\nWorld" matches Hello followed by a newline and then World.
\t | Matches a tab character. | "\tHello" matches a tab followed by Hello.
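These symbols can be experimented with in Python's re module, which treats them the same way for the cases in the table; this is only a Python illustration, not Lex itself, whose regular-expression dialect differs in other respects.

```python
import re

# Trying the table's metacharacters with Python's re module (illustration only).
print(re.findall(r"a.b", "acb adb a\nb"))   # ['acb', 'adb'] - '.' does not match the newline
print(re.findall(r"\.", "3.14"))            # ['.'] - escaped dot matches a literal '.'
print(re.findall(r"^abc", "abc..."))        # ['abc'] - start of string (use re.MULTILINE for per-line)
print(re.findall(r"xyz$", "say xyz"))       # ['xyz'] - end of string/line
print(re.findall(r"\t", "\tHello"))         # ['\t'] - a literal tab character
```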

Q4. 4a. Difference between Bottom-Up Parsing and Top-Down Parsing

Feature | Top-Down Parsing | Bottom-Up Parsing
Definition | Constructs the parse tree from the start symbol, expanding rules until the input string is matched. | Constructs the parse tree from the input, reducing it step by step to the start symbol.
Working Principle | Starts from the root (start symbol) and applies derivations. | Starts from the input string and applies reductions.
Example | Recursive Descent Parsing (e.g., an LL(1) parser) | Shift-Reduce Parsing (e.g., LR(1), SLR, LALR parsers)
Backtracking | May require backtracking unless predictive parsing is used. | No backtracking; uses a stack-based approach.
Use Case | Used in hand-written parsers (e.g., for small grammars). | Used in automated parsers (e.g., in compiler design).
a. Example of Top-Down Parsing (Recursive Descent)
Consider the grammar:
E→T+E
E→T
T→ id
For input: "id + id", the parser tries to expand E first, then match the input step by step.

b. Example of Bottom-Up Parsing (Shift-Reduce)


For the same grammar, Shift-Reduce Parsing works by shifting input symbols onto a stack
and then reducing when a rule matches.
For input "id + id", the steps are:
Shift id
Reduce id → T
Shift +
Shift id
Reduce id → T
Reduce T + T → E
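The same grammar can be traced with a small shift-reduce sketch in Python. The reduction choices below are hard-coded for this toy grammar, whereas a real LR parser would consult a parsing table to decide between shifting and reducing; the function name and trace output are illustrative assumptions.

```python
# Shift-reduce sketch for E -> T + E | T, T -> id.

def shift_reduce(tokens):
    stack, i = [], 0

    def try_reduce():
        if stack[-1:] == ["id"]:
            stack[-1:] = ["T"]; return True            # T -> id
        if stack[-1:] == ["T"] and i == len(tokens):
            stack[-1:] = ["E"]; return True            # E -> T (only at end of input)
        if stack[-3:] == ["T", "+", "E"]:
            stack[-3:] = ["E"]; return True            # E -> T + E
        return False

    while i < len(tokens) or stack != ["E"]:
        if not try_reduce():
            if i == len(tokens):
                return False                           # stuck: parse error
            stack.append(tokens[i]); i += 1            # shift the next token
        print(stack)                                   # trace each configuration
    return stack == ["E"]

print(shift_reduce(["id", "+", "id"]))   # True
```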

4b. Handle of a Right Sentential Form γ


A handle in bottom-up parsing is:

 A substring of a right sentential form γ.
 A match for the right-hand side of a production rule.
 The leftmost such substring whose reduction produces the previous right sentential form in a rightmost derivation.

🔹 Example
Consider the grammar:
E → E + T | T

T → T ∗ F | F

F → id
For the right sentential form "id + id * id":
The handle is the leftmost "id" (which is reduced to F first, giving F + id * id).
Handles are key to efficient parsing, as they guide reductions in shift-reduce parsing.

4c. Parsing "the girl sings passionately" Using Shift-Reduce Parsing


Grammar Rules (Simplified)
S→NP VP AD
NP→A N
A→ "the"
N→ "girl", "boy", etc.
VP→V
V→ "sings", "jumps", etc.
AD→ "passionately", "well", etc.

Shift-Reduce Steps

Stack | Input | Action
[] | the girl sings passionately | Shift "the"
["the"] | girl sings passionately | Reduce "the" → A
[A] | girl sings passionately | Shift "girl"
[A, "girl"] | sings passionately | Reduce "girl" → N
[A, N] | sings passionately | Reduce A N → NP
[NP] | sings passionately | Shift "sings"
[NP, "sings"] | passionately | Reduce "sings" → V
[NP, V] | passionately | Reduce V → VP
[NP, VP] | passionately | Shift "passionately"
[NP, VP, "passionately"] | (empty) | Reduce "passionately" → AD
[NP, VP, AD] | (empty) | Reduce NP VP AD → S
[S] | (empty) | Accept
Final Parse Tree:
          S
       /  |  \
     NP   VP   AD
    /  \   |     |
   A    N  V  passionately
   |    |  |
  the girl sings

Q5. a. Handle Pruning


Handle pruning is the process of:

1. Identifying the handle in a right sentential form.
2. Reducing it to the corresponding non-terminal.
3. Continuing until the start symbol is reached.

5b. Problems in Stack Implementation of Shift-Reduce Parsing Using Handle Pruning

1. Conflicts in Parsing Tables
   a. Shift-Reduce Conflict: the parser cannot decide whether to shift the next input symbol or reduce the handle on top of the stack.
   b. Reduce-Reduce Conflict: more than one reduction is possible at the same step, and the parser cannot decide which production to use.
2. Stack Overflow & Memory Management
   a. Large parse trees require efficient memory handling.
   b. Without proper management, deeply nested expressions can cause stack overflow.
