
Compiler Design

Assignment 2

No. of Questions: 13

Q1.

Ans: C) Constituent strings of a language

Detailed Solution: Regular expressions are used to define patterns that match strings in a language.
They describe the set of strings (constituent strings) that belong to a specific language. For example,
the regular expression a*b represents strings in the form of zero or more occurrences of a followed by
a single b, such as b, ab, aab, etc.
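As a small illustration (not part of the original question), this behaviour can be checked with Python's re module:

```python
import re

# a*b : zero or more 'a's followed by exactly one 'b'
pattern = re.compile(r"a*b")

for s in ["b", "ab", "aab", "ba", "aabb"]:
    # fullmatch requires the entire string to match the pattern
    print(s, bool(pattern.fullmatch(s)))
# b, ab, aab match; ba and aabb do not
```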

Q2.

Ans: c)

Explanation:

Lexeme    Token category
Sum       “Identifier”
=         “Assignment operator”
3         “Integer literal”
+         “Addition operator”
2         “Integer literal”
;         “End of statement”
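A minimal sketch of such a tokenizer in Python is shown below; the token names simply mirror the table above and are illustrative, not a fixed standard:

```python
import re

# One alternative per lexical class; more specific patterns come first.
TOKEN_SPEC = [
    ("IDENTIFIER",      r"[A-Za-z_][A-Za-z0-9_]*"),
    ("INTEGER_LITERAL", r"\d+"),
    ("ASSIGN",          r"="),
    ("PLUS",            r"\+"),
    ("SEMICOLON",       r";"),
    ("SKIP",            r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    # Yield (lexeme, token category) pairs, skipping whitespace.
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.group(), m.lastgroup)

print(list(tokenize("Sum = 3 + 2;")))
# [('Sum', 'IDENTIFIER'), ('=', 'ASSIGN'), ('3', 'INTEGER_LITERAL'),
#  ('+', 'PLUS'), ('2', 'INTEGER_LITERAL'), (';', 'SEMICOLON')]
```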

Q3.
Ans: C)

Detailed Solution: In Fortran, the statement DO 5 I = 1.25 is ambiguous without proper tokenization, because DO 5 I could initially be interpreted as the start of a DO-loop construct. Only upon encountering the . (in 1.25) does the tokenizer realize that this is not a DO loop but an assignment statement (DO5I = 1.25), where DO5I is an identifier; since blanks are not significant in fixed-form Fortran, DO 5 I and DO5I are scanned identically.
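A toy sketch of that decision is given below (a hypothetical helper, not a real Fortran scanner); it only illustrates that the ',' versus '.' after '=' is what disambiguates the two readings:

```python
def classify_fortran_stmt(stmt):
    """Rough illustration: decide between a DO loop and an assignment.

    Blanks are insignificant in fixed-form Fortran, so 'DO 5 I' and 'DO5I'
    look the same to the scanner; only the right-hand side settles it.
    """
    compact = stmt.replace(" ", "").upper()
    if compact.startswith("DO") and "=" in compact:
        rhs = compact.split("=", 1)[1]
        # A comma in the right-hand side means loop bounds: DO 5 I = 1,25
        return "DO loop" if "," in rhs else "assignment"
    return "other"

print(classify_fortran_stmt("DO 5 I = 1.25"))   # assignment
print(classify_fortran_stmt("DO 5 I = 1,25"))   # DO loop
```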

Q4.

Ans: d)

Explanation: The different lexical classes (tokens/lexemes) are identifiers, constants, keywords, and operators.
Q5.

Ans: d)

Detailed Solution: Regular expressions are used to define patterns for regular languages. However, the language of strings containing exactly one more 1 than 0s is not a regular language. Keeping track of the difference between the number of 1s and 0s requires a form of unbounded counting or memory, which regular expressions (and finite automata) cannot provide.

This type of language can only be recognized by more powerful computational models, such as a
pushdown automaton (for context-free languages) or a Turing machine. Hence, it is not
possible to write a regular expression for this condition.
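A minimal sketch of a recognizer for this language, using an ordinary counter in place of a pushdown automaton's stack, shows why unbounded memory is needed:

```python
def one_more_one_than_zero(s):
    # The running difference (#1s - #0s) is unbounded, so no fixed,
    # finite set of states can track it; a counter (or PDA stack) can.
    diff = 0
    for ch in s:
        if ch == "1":
            diff += 1
        elif ch == "0":
            diff -= 1
        else:
            return False
    return diff == 1

print(one_more_one_than_zero("101"))    # True  (two 1s, one 0)
print(one_more_one_than_zero("1100"))   # False (equal counts)
print(one_more_one_than_zero("1"))      # True
```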

Q6.

Ans: C)

Explanation: The regular expression contains two 0s, each surrounded by (0+1)*, which means accepted strings must contain at least two 0s.
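Assuming the expression in the question is (0+1)*0(0+1)*0(0+1)*, the behaviour can be checked with Python's re module, where the textbook + (union) is written as |:

```python
import re

# (0+1)*0(0+1)*0(0+1)* in POSIX/Python syntax
pattern = re.compile(r"(0|1)*0(0|1)*0(0|1)*")

for s in ["00", "0110", "101", "111"]:
    print(s, bool(pattern.fullmatch(s)))
# "00" and "0110" (at least two 0s) match; "101" and "111" do not
```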

Q7.

Finite automata is an implementation of

a) Regular expression
b) Any grammar
c) Part of the regular expression
d) None of the other options

Ans: A)

Detailed Solution: Finite automata (both deterministic and non-deterministic) are mathematical
models used to recognize or implement regular languages, which are the types of languages
described by regular expressions.

• Regular expressions provide a way to describe patterns in strings, while finite automata
serve as the computational model that accepts or rejects strings based on those patterns.
• There is a direct correspondence between regular expressions and finite automata: for
every regular expression, there exists an equivalent finite automaton, and vice versa.
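As a small illustration of the "implementation" side, here is a hand-built DFA for the same a*b pattern used in Q1 (state names are invented for the example):

```python
# DFA for a*b: "start" loops on 'a' and moves to "accept" on 'b';
# any symbol after the 'b' (or an unexpected symbol) leads to a dead end.
TRANSITIONS = {
    ("start", "a"): "start",
    ("start", "b"): "accept",
}

def dfa_accepts(s):
    state = "start"
    for ch in s:
        state = TRANSITIONS.get((state, ch))
        if state is None:      # implicit dead (trap) state
            return False
    return state == "accept"

print(dfa_accepts("aaab"))  # True
print(dfa_accepts("b"))     # True
print(dfa_accepts("aba"))   # False
```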

Q8.

Ans: A)

Detailed Solution: In a Nondeterministic Finite Automaton (NFA), it is possible to transition
from one state to another without consuming any input symbols. These transitions are called
epsilon (ε) transitions.

• NFA: Allows epsilon transitions, which means the automaton can move to a new state
without reading any input.
• DFA (Deterministic Finite Automaton): Does not allow epsilon transitions; every
transition in a DFA must consume an input symbol.
• Pushdown Automaton: While it allows stack operations, it is a stack-based machine rather than a finite-state machine like an NFA, so it is not the intended answer here.
• All of the mentioned: Incorrect because DFA does not support epsilon transitions.

Q9.

Ans: A)

Explanation: The ε-closure of a set of states P of an NFA is defined as the set of states
reachable from any state in P by following ε-transitions.
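A short sketch of computing the ε-closure over a hypothetical ε-transition table:

```python
def epsilon_closure(states, eps_moves):
    """Return every state reachable from `states` via ε-transitions only.

    `eps_moves` maps a state to the set of states reachable in one ε-step.
    """
    closure = set(states)
    stack = list(states)
    while stack:
        q = stack.pop()
        for r in eps_moves.get(q, ()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return closure

# Hypothetical NFA fragment: 0 -ε-> 1, 0 -ε-> 2, 2 -ε-> 3
eps = {0: {1, 2}, 2: {3}}
print(epsilon_closure({0}, eps))   # {0, 1, 2, 3}
```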

Q10.

Ans: c)

Detailed Solution: Both Nondeterministic Finite Automata (NFA) and Deterministic
Finite Automata (DFA) are computationally equivalent, meaning they recognize the
same class of languages: regular languages.
• NFA: Easier to design and can have multiple transitions for the same input or epsilon
transitions. However, it is not deterministic.
• DFA: Deterministic in nature and has no epsilon transitions or multiple transitions for the
same input. It is easier to implement in software or hardware.

While an NFA may appear more "flexible," any NFA can be converted into an equivalent DFA.
The DFA might have exponentially more states in the worst case, but it recognizes the exact
same language as the NFA.

Thus, NFAs and DFAs are equally powerful in terms of the languages they can recognize.

Q11.

Ans: A)

Explanation: The conversion of a non-deterministic automaton into a deterministic one is a
process called subset construction (also known as power-set construction).
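A compact sketch of the subset construction for an NFA without ε-moves (an ε-NFA would first apply the ε-closure from Q9); the example NFA and all names are illustrative:

```python
from collections import deque

def subset_construction(nfa_delta, start, accepts, alphabet):
    """nfa_delta maps (state, symbol) -> set of NFA states."""
    start_set = frozenset({start})
    dfa_delta, seen, work = {}, {start_set}, deque([start_set])
    while work:
        S = work.popleft()
        for a in alphabet:
            # Union of the NFA moves from every state in S on symbol a.
            T = frozenset(q2 for q in S for q2 in nfa_delta.get((q, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                work.append(T)
    dfa_accepts = {S for S in seen if S & accepts}
    return dfa_delta, start_set, dfa_accepts

# NFA for (0|1)*1: state 0 loops on both symbols and guesses the final 1 into state 1.
delta = {(0, "0"): {0}, (0, "1"): {0, 1}}
dfa, q0, F = subset_construction(delta, 0, {1}, "01")
print(len({s for (s, _) in dfa}))   # number of DFA states reached (2 here)
```

In the worst case the resulting DFA can have exponentially many such subset-states, which is the blow-up mentioned in Q10.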

Q12.

Ans: C)

Explanation: The Thompson Construction method is used to turn a regular expression into an
NFA by breaking the expression into fragments: a small NFA is built for each input symbol and
for each operation (union, concatenation, closure), and the fragments are combined with ε-transitions.
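A minimal sketch of the fragment-building step is given below (symbol, concatenation and star fragments only; parsing the regular expression itself is omitted, and the fragment representation is invented for the example):

```python
import itertools

_new_state = itertools.count()

def symbol_fragment(ch):
    """NFA fragment with a single transition: start --ch--> accept."""
    s, t = next(_new_state), next(_new_state)
    return {"start": s, "accept": t, "edges": [(s, ch, t)]}

def concat_fragment(f1, f2):
    """Glue f1's accept state to f2's start state with an ε-edge."""
    edges = f1["edges"] + f2["edges"] + [(f1["accept"], "ε", f2["start"])]
    return {"start": f1["start"], "accept": f2["accept"], "edges": edges}

def star_fragment(f):
    """New start/accept states with ε-edges to skip or repeat the inner fragment."""
    s, t = next(_new_state), next(_new_state)
    edges = f["edges"] + [
        (s, "ε", f["start"]), (f["accept"], "ε", t),   # go through once
        (s, "ε", t),                                   # skip entirely
        (f["accept"], "ε", f["start"]),                # loop back
    ]
    return {"start": s, "accept": t, "edges": edges}

# Fragments for a*b, combined bottom-up as Thompson Construction prescribes.
ab = concat_fragment(star_fragment(symbol_fragment("a")), symbol_fragment("b"))
print(len(ab["edges"]), "edges in the NFA for a*b")   # 7
```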

Q13.

Ans: D)

Detailed Solution: In the scenario where a compiler automatically corrects errors like changing
"fi" to "if," the error correction involves transposing the characters.

• Transpose character: This involves swapping two adjacent characters in a string to
correct a mistake, such as changing "fi" to "if."
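A quick illustrative check for a single adjacent transposition (a toy helper, not a real compiler's error-recovery routine):

```python
def is_single_transposition(wrong, right):
    """True if `wrong` becomes `right` by swapping one pair of adjacent characters."""
    if len(wrong) != len(right) or wrong == right:
        return False
    diffs = [i for i, (a, b) in enumerate(zip(wrong, right)) if a != b]
    return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and wrong[diffs[0]] == right[diffs[1]]
            and wrong[diffs[1]] == right[diffs[0]])

print(is_single_transposition("fi", "if"))    # True
print(is_single_transposition("fi", "for"))   # False
```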
