TCS Exam
2 marks
1. DFA vs. NFA:
o DFA has exactly one transition per state for each input symbol, while an NFA can have multiple or no transitions.
o A DFA may need many more states and is often harder to construct directly than an NFA.
o NFA allows ε-transitions, which are not permitted in DFA.
o Every DFA is also an NFA, and every NFA can be converted to an equivalent DFA (possibly with exponentially more states).
2. Concatenation of languages:
o Combines strings from two languages, L₁ and L₂, as xy where x ∈ L₁, y ∈ L₂.
o Example: L₁ = {a}, L₂ = {b}; L₁L₂ = {ab}.
o Useful in building larger languages from simpler ones.
o The result is the set of all concatenated strings from both languages (see the sketch below).
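A minimal Python sketch of concatenation on finite languages, using the example sets above plus one slightly larger illustrative pair:

    L1 = {"a"}
    L2 = {"b"}
    print({x + y for x in L1 for y in L2})   # {'ab'}

    # A slightly larger illustrative pair: every pairing of x and y is formed.
    L1 = {"a", "ab"}
    L2 = {"b", "bb"}
    print({x + y for x in L1 for y in L2})   # {'ab', 'abb', 'abbb'}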
3. Significance of finite automata:
o Provides a mathematical model for digital computation.
o Recognizes patterns, validates inputs, and verifies strings in a language.
o Forms the basis of lexical analysis in compilers.
o Facilitates the study of language classes like regular and context-free languages.
4. DFA minimization:
o Reduces the number of states by merging equivalent states.
o Uses equivalence classes to identify redundant states.
o Improves computational efficiency while preserving language recognition.
o Ensures the DFA is the simplest representation of its language (a small sketch follows).
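A minimal Python sketch of minimization by partition refinement; the three-state DFA below is an assumed example in which two states turn out to be equivalent:

    states = {"A", "B", "C"}
    alphabet = ["0", "1"]
    delta = {("A", "0"): "B", ("A", "1"): "C",
             ("B", "0"): "B", ("B", "1"): "C",
             ("C", "0"): "C", ("C", "1"): "C"}
    accepting = {"C"}

    # Start from the accepting / non-accepting split and refine until stable.
    partition = [accepting, states - accepting]
    while True:
        index = {q: i for i, blk in enumerate(partition) for q in blk}
        refined = []
        for blk in partition:
            groups = {}
            for q in blk:
                # Two states stay together only if every symbol sends them to the same block.
                signature = tuple(index[delta[(q, a)]] for a in alphabet)
                groups.setdefault(signature, set()).add(q)
            refined.extend(groups.values())
        if len(refined) == len(partition):
            break
        partition = refined

    print(partition)   # [{'C'}, {'A', 'B'}]: A and B are equivalent and can be merged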
5. Formal definition of DFA:
o DFA is represented as a 5-tuple (Q, Σ, δ, q₀, F).
o Q: Finite set of states; Σ: Input alphabet.
o δ: Transition function mapping Q × Σ → Q.
o q₀: Initial state; F: Set of accepting states (illustrated below).
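A minimal Python sketch of the 5-tuple as plain data, with a run function that follows δ; the machine below (accepting strings that end in 'a') is an assumed example:

    Q = {"q0", "q1"}
    Sigma = {"a", "b"}
    delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
             ("q1", "a"): "q1", ("q1", "b"): "q0"}
    q0 = "q0"
    F = {"q1"}

    def accepts(w: str) -> bool:
        state = q0
        for symbol in w:                  # delta: Q x Sigma -> Q
            state = delta[(state, symbol)]
        return state in F                 # accept iff the run ends in F

    print(accepts("aba"), accepts("ab"))  # True False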
6. NFA to DFA conversion:
o Convert NFA states into DFA states using the subset construction method.
o Each DFA state represents a subset of NFA states.
o Add transitions for all possible input symbols.
o Ensure equivalent language recognition between the NFA and the resulting DFA (sketched below).
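A minimal Python sketch of the subset construction on a small assumed NFA (strings ending in "ab"); each DFA state is a frozenset of NFA states:

    nfa_delta = {("p", "a"): {"p", "q"}, ("p", "b"): {"p"},
                 ("q", "b"): {"r"}}
    nfa_start, nfa_accept = "p", {"r"}
    alphabet = ["a", "b"]

    start = frozenset({nfa_start})
    dfa_states, dfa_delta, work = {start}, {}, [start]
    while work:                            # explore reachable subsets only
        S = work.pop()
        for a in alphabet:
            T = frozenset(t for s in S for t in nfa_delta.get((s, a), set()))
            dfa_delta[(S, a)] = T
            if T not in dfa_states:
                dfa_states.add(T)
                work.append(T)

    dfa_accept = {S for S in dfa_states if S & nfa_accept}
    print(len(dfa_states), "DFA states,", len(dfa_accept), "accepting")   # 3 DFA states, 1 accepting

In the worst case the construction produces 2^n subsets, which is why converted DFAs can be exponentially larger than the original NFA.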
7. Closure properties of regular languages:
o Closed under union: Combines strings from L₁ and L₂.
o Closed under concatenation: Strings formed by appending strings of L₂ to strings of L₁.
o Closed under Kleene star: Zero or more repetitions of strings in the language.
o Also closed under intersection and complementation.
8. Regular expression (RE):
o Compact notation for describing regular languages.
o Operations include union (|), concatenation, and Kleene star (*).
o Example: RE a*b generates strings with zero or more 'a's followed by one 'b'.
o Regular expressions can be converted to finite automata (checked below).
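A quick check of the RE a*b with Python's re module (fullmatch anchors the pattern to the whole string):

    import re

    pattern = re.compile(r"a*b")
    for w in ["b", "ab", "aaab", "ba", "aa"]:
        print(w, bool(pattern.fullmatch(w)))   # b, ab, aaab match; ba, aa do not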
9. Pumping lemma significance:
o A tool to prove that certain languages are not regular.
o States that sufficiently long strings in a regular language can be "pumped" (repeated) without leaving the language.
o Helps identify non-regular languages through contradictions.
o Only informative for infinite languages, since every finite language is regular (worked through below).
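A small Python sketch of the standard contradiction argument for L = {a^n b^n}: with s = a^p b^p and |xy| ≤ p, the pumped part y contains only 'a's, so every legal decomposition leaves the language when pumped (p = 5 is an assumed pumping length for the demo):

    def in_L(s: str) -> bool:
        n = len(s) // 2
        return s == "a" * n + "b" * n

    p = 5
    s = "a" * p + "b" * p
    for i in range(p):                         # x = a^i
        for k in range(1, p - i + 1):          # y = a^k, so |xy| <= p and |y| >= 1
            x, y, z = s[:i], s[i:i + k], s[i + k:]
            assert not in_L(x + y * 2 + z)     # pumping adds a's but no b's
    print("no decomposition survives pumping, so L cannot be regular")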
10. Left-linear grammar:
o Productions are of the form A → Ba or A → a.
o The non-terminal, if present, appears at the left end of the right-hand side of each production.
o Generates exactly the regular languages, just like right-linear grammars.
o Often used in parser design and in the study of derivations.
11. Parse tree:
o A hierarchical tree representation of the derivation of a string in a grammar.
o The root node represents the start symbol of the grammar.
o Interior nodes are non-terminals, and leaf nodes are terminals.
o Helps visualize the structure of a string according to grammar rules (example below).
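A minimal Python sketch of a parse tree as nested tuples, for the assumed grammar E → E + E | id and the string "id + id"; reading the leaves left to right gives back the derived string:

    tree = ("E",
            ("E", ("id",)),
            ("+",),
            ("E", ("id",)))

    def leaves(node):
        # Single-element tuples are leaves (terminals); larger tuples are non-terminals with children.
        if len(node) == 1:
            return [node[0]]
        return [leaf for child in node[1:] for leaf in leaves(child)]

    print(leaves(tree))   # ['id', '+', 'id']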
12. Chomsky hierarchy:
o Categorizes grammars into four types: Type 0 (Unrestricted), Type 1 (Context-sensitive), Type 2 (Context-free), and Type 3 (Regular).
o Moving from Type 0 to Type 3, the restrictions on productions increase and the generative power decreases.
o Regular languages are the simplest; recursively enumerable (Type 0) languages are the most general.
o Provides a theoretical framework for understanding computational power.
13. Pushdown automaton (PDA):
o A computational model extending finite automata with a stack.
o Used to recognize context-free languages.
o Can store intermediate computations on the stack.
o Transitions depend on the current input, state, and stack symbol (see the recognizer sketch below).
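A minimal Python sketch of a PDA-style recognizer for {a^n b^n | n ≥ 1}, written with an explicit stack rather than a full transition table (an illustrative simplification):

    def pda_accepts(w: str) -> bool:
        stack, phase = [], "reading_a"
        for c in w:
            if phase == "reading_a" and c == "a":
                stack.append("A")               # push one marker per 'a'
            elif c == "b" and stack:
                phase = "reading_b"             # after the first 'b', only 'b's may follow
                stack.pop()                     # pop one marker per 'b'
            else:
                return False                    # 'a' after 'b', or pop from an empty stack
        return phase == "reading_b" and not stack   # every 'a' matched by a 'b'

    print(pda_accepts("aabb"), pda_accepts("aab"), pda_accepts("abab"))   # True False False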
14. Difference between DPDA and NPDA:
o DPDA (Deterministic PDA): at most one transition for each combination of state, input symbol, and stack symbol.
o NPDA (Non-deterministic PDA): multiple transitions allowed for the same configuration.
o DPDA recognizes a proper subset of the context-free languages (the deterministic CFLs), while NPDA recognizes all of them.
o NPDA uses non-deterministic choices to simplify certain computations.
15. Steps to convert CFG to CNF:
o Remove null productions (productions generating the empty string).
o Eliminate unit productions (A → B, where B is a non-terminal).
o Simplify the grammar by removing useless symbols.
o Convert all remaining productions to the form A → BC or A → a (worked example below).
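A small hand-worked example in Python data form (the grammar is assumed, and it has no null or unit productions to remove, so only the final restructuring step is visible):

    # Original CFG:  S -> aSb | ab        (generates {a^n b^n | n >= 1})
    original = {"S": [["a", "S", "b"], ["a", "b"]]}

    # Equivalent CNF: every production is A -> BC or A -> a.
    cnf = {
        "S": [["A", "X"], ["A", "B"]],   # S -> AX | AB
        "X": [["S", "B"]],               # X -> SB
        "A": [["a"]],                    # A -> a  (terminal wrapped in a fresh non-terminal)
        "B": [["b"]],                    # B -> b
    }
    # Sample derivation in CNF: S => AX => aX => aSB => aSb => a(ab)b = aabb.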
16. Importance of context-free languages:
o Defines the syntax of most programming languages.
o Recognized by pushdown automata.
o Provides a foundation for compilers and interpreters.
o Extends regular languages with hierarchical and nested structures.
17. Greibach Normal Form (GNF):
o A CFG in which every production is of the form A → aα, where a is a terminal and α is a (possibly empty) sequence of non-terminals.
o Useful in parsing algorithms like top-down parsing.
o Helps in simplifying grammar transformations.
o Equal in generative power to other normal forms such as Chomsky Normal Form.
18. Components of a Turing machine:
o Tape: Infinite memory divided into cells.
o Head: Reads and writes symbols on the tape.
o State register: Tracks the current state.
o Transition function: Determines the next state, tape symbol, and head movement (see the simulator sketch below).
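A minimal Python sketch wiring these components together; the machine below (assumed for illustration) scans right, rewriting every 0 as 1, and halts at the first blank '_':

    tape = list("0010") + ["_"]          # tape: cells of symbols, '_' is blank
    head, state = 0, "scan"              # head position and state register
    rules = {                            # transition function:
        ("scan", "0"): ("1", +1, "scan"),    # (state, read) -> (write, move, next state)
        ("scan", "1"): ("1", +1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),
    }
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move

    print("".join(tape))                 # 1111_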
19. Multi-tape Turing machine:
o A Turing machine with multiple tapes, each with its own head.
o Enhances computational efficiency by allowing parallel data processing.
o Equivalent in computational power to single-tape TMs.
o Easier to design algorithms for certain problems.
20. Difference between Mealy and Moore machines:
o Mealy machine: Output depends on both the current state and the input.
o Moore machine: Output depends only on the current state.
o Mealy machines generally have fewer states.
o Both have the same expressive power; each can be converted into the other (compared below).
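A minimal Python sketch contrasting the two; both machines below are assumed examples that emit the running parity of the bits read so far:

    bits = [1, 0, 1, 1]

    # Moore: output attached to the state reached after each input.
    nxt = {("even", 0): "even", ("even", 1): "odd",
           ("odd", 0): "odd",  ("odd", 1): "even"}
    moore_out = {"even": 0, "odd": 1}
    state, moore_trace = "even", []
    for b in bits:
        state = nxt[(state, b)]
        moore_trace.append(moore_out[state])

    # Mealy: output attached to the (state, input) transition itself.
    mealy_out = {("even", 0): 0, ("even", 1): 1,
                 ("odd", 0): 1,  ("odd", 1): 0}
    state, mealy_trace = "even", []
    for b in bits:
        mealy_trace.append(mealy_out[(state, b)])
        state = nxt[(state, b)]

    print(moore_trace, mealy_trace)   # both [1, 1, 0, 1] for this input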
21. Ambiguous grammar:
o A CFG in which some string has more than one valid parse tree.
o Example: the grammar E → E + E | E * E | id for arithmetic expressions.
o Causes complications in parsing and language analysis.
o Ambiguity is resolved by rewriting the grammar or by using precedence and associativity rules.
22. Regular grammar in automata:
o Equivalent to finite automata.
o Productions are of the form A → aB or A → a (right-linear form).
o Simplifies the design of finite automata.
o Generates regular languages efficiently.
23. Advantages of DFA minimization:
o Reduces the number of states, simplifying the automaton.
o Improves performance and reduces memory usage.
o Retains the same language recognition capability.
o Ensures the automaton is minimal for the language.
24. Intersection of languages:
o Contains the strings common to L₁ and L₂.
o Example: L₁ = {a, b}, L₂ = {a, c}; L₁ ∩ L₂ = {a}.
o Used to find shared patterns between languages.
o Regular languages are closed under intersection (see the product construction below).
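A minimal Python sketch of the product construction behind this closure property; the two assumed DFAs below accept "even number of a's" and "ends in b", and the product accepts exactly their intersection:

    d1 = {("e", "a"): "o", ("e", "b"): "e", ("o", "a"): "e", ("o", "b"): "o"}   # even number of a's
    d2 = {("x", "a"): "x", ("x", "b"): "y", ("y", "a"): "x", ("y", "b"): "y"}   # ends in 'b'
    f1, f2 = {"e"}, {"y"}

    def product_accepts(w: str) -> bool:
        s1, s2 = "e", "x"
        for c in w:                        # run both DFAs in lock-step
            s1, s2 = d1[(s1, c)], d2[(s2, c)]
        return s1 in f1 and s2 in f2       # accept only if both accept

    print(product_accepts("aab"), product_accepts("ab"))   # True False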
25. ε-closure with example:
o The set of all states reachable from a given state using only ε-moves (including the state itself).
o Example: if δ(q, ε) = {q₁, q₂}, then ε-closure(q) = {q, q₁, q₂}.
o Useful in NFA-to-DFA conversion.
o Helps handle non-determinism in automata (computed below).
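A minimal Python sketch computing an ε-closure by depth-first search; the ε-move table is an assumed example that extends the one above with one more move (q₁ → q₃) to show that closures are transitive:

    eps = {"q": {"q1", "q2"}, "q1": {"q3"}}     # epsilon-moves only

    def eps_closure(state):
        closure, stack = {state}, [state]
        while stack:                             # follow epsilon-moves until no new state appears
            for nxt in eps.get(stack.pop(), set()):
                if nxt not in closure:
                    closure.add(nxt)
                    stack.append(nxt)
        return closure

    print(eps_closure("q"))   # {'q', 'q1', 'q2', 'q3'}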
26. Purpose of accept states in automata:
o Indicate which strings belong to the language.
o The automaton accepts an input if it ends in an accept state after reading it.
o Necessary for language recognition.
o Used to define the language of the automaton.
27. Applications of formal languages:
o Syntax analysis in programming languages.
o Text pattern matching (e.g., regular expressions).
o Data compression and transmission protocols.
o Artificial intelligence for natural language processing.
28. Language recognition in DFA:
o The DFA reads a string over its input alphabet symbol by symbol.
o It transitions through states based on the input symbols.
o It accepts the string if it ends in an accept state after reading the whole string.
o It rejects the string if it ends in a non-accepting state.
29. Difference between FA and PDA:
o FA: No stack; recognizes regular languages.
o PDA: Uses a stack; recognizes context-free languages.
o PDA can handle nested structures; FA cannot.
o PDA is more powerful but less efficient than FA.
30. Relationship between LBA and CFG:
o A Linear Bounded Automaton (LBA) recognizes context-sensitive languages.
o A CFG generates context-free languages, which are a subset of the context-sensitive languages.
o An LBA is a Turing machine whose tape is restricted to the portion holding the input.
o Provides a bridge between automata theory and formal grammars.