
Enrollment No: _

LABORATORY MANUAL
Compiler Design (3170701)
B.E. SEM. VII (COMPUTER), Year 2024-25 (Odd Sem.)

DEPARTMENT OF COMPUTER ENGINEERING

GUJARAT POWER ENGINEERING AND RESEARCH INSTITUTE, MEHSANA

(Managed By Gujarat Technological University)

Near Toll Booth, Ahmedabad-Mehsana Expressway, Village Mewad,
District Mehsana, Gujarat
Contact No. 02762-292262/63
GUJARAT POWER ENGINEERING AND
RESEARCH INSTITUTE, MEHSANA
(Managed By Gujarat Technological University)

DEPARTMENT OF COMPUTER ENGINEERING

CERTIFICATE
This is to certify that the work carried out by Mr./Ms.

Enrollment No. , student of

Computer Engineering, Semester VII of Session 2024-2025, has been satisfactorily completed at

Gujarat Power Engineering and Research Institute, Mehsana. This manual is a record of

his/her own work for the subject carried out under

competent supervision and guidance.

Date of Submission:

Faculty In-charge Head of Department

Prof. Hemal M Patel Prof. Avani Raval


GUJARAT POWER ENGINEERING AND RESEARCH INSTITUTE, MEHSANA
(Managed By Gujarat Technological University)
B.E. SEM. VII (COMPUTER), Year: 2024-25 (Odd Sem.)
3170701: COMPILER DESIGN

INDEX

No.   Title                                                                    Date   Sign

1.    Implementation of finite automata and string validation.
2.    Study about the Lex tool.
3.    Write a Python program to identify whether a given line is a comment or not.
4.    Write a Python program to remove left recursion from a given grammar.
5.    Write a Python program to find the FIRST set of a given grammar.
6.    Write a Python program to find the FOLLOW set of a given grammar.
7.    Write a Python program to construct an LL(1) parser.
8.    Write a Python program to construct a recursive descent parser.
9.    Write a Python program to implement operator precedence parsing.
10.   Write a Python program to implement LALR parsing.



PRACTICAL-1
Aim: Implementation of Finite Automata and String Validation.
def accepts(transition, initial, accepting, s):
    state = initial
    for char in s:
        # Follow the transition for the current character; None means no move exists
        state = transition[state].get(char, None)
        if state is None:
            return False
    return state in accepting

#DFA transitions:
dfa = {
'q0': {'a': 'q1', 'b': 'q3'},
'q1': {'a': 'q3', 'b': 'q2'},
'q2': {'a': 'q1', 'b': 'q0'},
'q3': {'a': 'q3', 'b': 'q3'}
}

input_string = input("Enter a string:")


if accepts(dfa, 'q0', {'q2'}, input_string):
print("Accepted")
else:
print("Not Accepted")


OUTPUT:


PRACTICAL-2
Aim: Introduction to Lex Tool
Lex is a tool (a computer program) that generates lexical analysers, which convert a stream of
characters into tokens. The Lex tool is itself a compiler: it takes a specification of token
patterns as input and produces a C program that recognizes those patterns. It is commonly used
together with YACC (Yet Another Compiler Compiler). Lex was written by Mike Lesk and Eric Schmidt.
Function of Lex
1. In the first step, the source program written in the Lex language (a file named, say,
‘File.l’) is given as input to the Lex compiler, commonly known as Lex, which produces
lex.yy.c as output (typically via the command lex File.l).
2. The generated lex.yy.c is then given as input to the C compiler (typically cc lex.yy.c -ll),
which produces an executable ‘a.out’; running a.out reads the stream of characters and
generates tokens as output.

Block diagram of Lex


Lex File Format

A Lex program consists of three parts, separated by %% delimiters:


Declarations
%%
Translation rules
%%
Auxiliary procedures

• Declarations: This section contains declarations of variables and regular definitions.
• Translation rules: Each rule consists of a pattern and an associated action.
• Auxiliary procedures: This section holds auxiliary functions used in the actions.

For example:

/* declarations */
number [0-9]+
%%
/* translation rules */
if      { return(IF); }
%%
/* auxiliary procedures */
int numberSum() { ... }


Example: Write a Lex program to check whether a given input is a valid identifier or not.
/* Lex code to determine whether the input is a valid identifier or not */
%{
#include <stdio.h>
%}

%%
  /* regex for valid identifiers */
^[a-zA-Z_][a-zA-Z0-9_]*   printf("Valid Identifier");
  /* regex for invalid identifiers */
^[^a-zA-Z_]               printf("Invalid Identifier");
.                         ;
%%

int main()
{
    yylex();
    return 0;
}
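
For comparison only (an illustrative Python sketch, not part of the Lex workflow), the same identifier check can be expressed with the standard re module; is_valid_identifier is a hypothetical helper used only here:

import re

def is_valid_identifier(text):
    # Mirrors the Lex rules above: a letter or underscore followed by
    # letters, digits or underscores.
    return re.fullmatch(r"[a-zA-Z_][a-zA-Z0-9_]*", text) is not None

print(is_valid_identifier("count_1"))   # True
print(is_valid_identifier("1count"))    # False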
OUTPUT:


PRACTICAL-3

Aim: Write a Python program to check whether a given string is a comment or not.

def is_comment(user_input):
    user_input = user_input.strip()
    # Python-style single-line comment
    if user_input.startswith("#"):
        return True
    # C/C++-style single-line comment
    if user_input.startswith("//"):
        return True
    # C-style block comment
    if user_input.startswith("/*") and user_input.endswith("*/"):
        return True
    return False

user_input = input("Enter a string: ")

if is_comment(user_input):
    print("The string is a comment.")
else:
    print("The string is not a comment.")


OUTPUT:


PRACTICAL-4

Aim: Write a Python program to remove left recursion from a given grammar.

def eliminate_left_recursion(grammar):
    new_grammar = {}
    for non_terminal, productions in grammar.items():
        new_non_terminal = non_terminal + "'"
        alphas = []   # tails of left-recursive productions  A -> A alpha
        betas = []    # non-recursive productions             A -> beta
        for production in productions:
            if production[0] == non_terminal:
                # Left recursion detected: keep only the tail of the production
                alphas.append(production[1:])
            else:
                betas.append(production)
        if alphas:
            # A  -> beta A'        for every non-recursive alternative
            new_grammar[non_terminal] = [beta + [new_non_terminal] for beta in betas]
            # A' -> alpha A' | ε   for every left-recursive alternative
            new_grammar[new_non_terminal] = [alpha + [new_non_terminal] for alpha in alphas] + [['ε']]
        else:
            new_grammar[non_terminal] = productions
    return new_grammar

grammar = {
    "A": [["A", "a"], ["b"]],
    "B": [["B", "c"], ["d"]],
    "C": [["C", "e"], ["f"]]
}

new_grammar = eliminate_left_recursion(grammar)

for key, value in new_grammar.items():
    print(f"{key}: {value}")

OUTPUT:


PRACTICAL-5

Aim: Write a Python program to find the FIRST set for a given grammar.

def find_first_set(grammar):
    first_set = {}
    for non_terminal in grammar:
        first_set[non_terminal] = set()

    while True:
        # Deep-copy the current sets so that convergence can be detected
        old_first_set = {nt: s.copy() for nt, s in first_set.items()}
        for non_terminal, productions in grammar.items():
            for production in productions:
                if production[0].islower():
                    # First symbol is a terminal (ε is treated like a terminal here)
                    first_set[non_terminal].add(production[0])
                else:
                    # First symbol is a non-terminal: copy its FIRST set
                    first_set[non_terminal].update(first_set[production[0]])
        if old_first_set == first_set:
            break

    return first_set

# Example usage:

grammar = {
    'S': ['a'],
    'A': ['c'],
    'B': ['b', 'ε'],
    'C': ['d', 'ε']
}

first_set = find_first_set(grammar)

print("First set:")
for non_terminal, terminals in first_set.items():
    print(f"{non_terminal}: {', '.join(terminals)}")


OUTPUT:


PRACTICAL-6

Aim: Write a Python program to find the FOLLOW set for a given grammar.
def find_follow_set(grammar, first_set):
    follow_set = {}
    for non_terminal in grammar:
        follow_set[non_terminal] = set()

    # The end-of-input marker $ belongs to FOLLOW of the start symbol
    start_symbol = list(grammar.keys())[0]
    follow_set[start_symbol].add('$')

    while True:
        # Deep-copy the current sets so that convergence can be detected
        old_follow_set = {nt: s.copy() for nt, s in follow_set.items()}
        for non_terminal, productions in grammar.items():
            for production in productions:
                for i in range(len(production) - 1):
                    if production[i].isupper():
                        next_symbol = production[i + 1]
                        if next_symbol.islower():
                            # A terminal that follows a non-terminal goes straight into FOLLOW
                            follow_set[production[i]].update({next_symbol})
                        else:
                            # A non-terminal that follows: add its FIRST set
                            follow_set[production[i]].update(first_set[next_symbol])
                # The last symbol of a production inherits FOLLOW of the left-hand side
                if production[-1].isupper():
                    follow_set[production[-1]].update(follow_set[non_terminal])
        if old_follow_set == follow_set:
            break

    return follow_set

# Example usage:

grammar = {
    'S': ['AB', 'AC'],
    'A': ['a', 'b'],
    'B': ['c', 'd'],
    'C': ['e', 'f']
}

first_set = {
    'S': {'a', 'b'},
    'A': {'a', 'b'},
    'B': {'c', 'd'},
    'C': {'e', 'f'}
}

follow_set = find_follow_set(grammar, first_set)

print("Follow set:")
for non_terminal, terminals in follow_set.items():
    print(f"{non_terminal}: {', '.join(terminals)}")

OUTPUT:


PRACTICAL-7

Aim: Write a Python program to implement an LL(1) parser for a given grammar.

The program below checks whether the given grammar satisfies the LL(1) condition by comparing the FIRST and FOLLOW sets of its alternatives.
class LL1Checker:

    def __init__(self, grammar):
        self.grammar = grammar
        self.first_set = {}
        self.follow_set = {}
        self.first_set = self.compute_first_set()
        self.follow_set = self.compute_follow_set()

    def compute_first_set(self):
        first_set = {}
        for non_terminal, productions in self.grammar.items():
            first_set[non_terminal] = set()
            for production in productions:
                if production[0].islower():  # terminal symbol
                    first_set[non_terminal].add(production[0])
                else:  # non-terminal symbol
                    first_set[non_terminal].update(
                        self.compute_first_set_for_non_terminal(production[0]))
        return first_set

    def compute_first_set_for_non_terminal(self, non_terminal):
        # Memoized helper: compute FIRST of a single non-terminal on demand
        if non_terminal not in self.first_set:
            self.first_set[non_terminal] = set()
            for production in self.grammar[non_terminal]:
                if production[0].islower():  # terminal symbol
                    self.first_set[non_terminal].add(production[0])
                else:  # non-terminal symbol
                    self.first_set[non_terminal].update(
                        self.compute_first_set_for_non_terminal(production[0]))
        return self.first_set[non_terminal]

    def compute_follow_set(self):
        follow_set = {}
        for non_terminal in self.grammar:
            follow_set[non_terminal] = set()
        # Add the end marker $ to the follow set of the start symbol
        follow_set[self.get_start_symbol()] = {'$'}
        for non_terminal, productions in self.grammar.items():
            for production in productions:
                for i in range(len(production) - 1):
                    if production[i].isupper():  # non-terminal symbol
                        follow_set[production[i]].update(self.first_set[production[i + 1]])
                        if '' in self.first_set[production[i + 1]]:
                            follow_set[production[i]].update(follow_set[non_terminal])
        return follow_set

    def get_start_symbol(self):
        for non_terminal, productions in self.grammar.items():
            if len(productions) > 0 and productions[0][0].isupper():  # non-terminal symbol
                return non_terminal
        return None

    def first_of(self, symbol):
        # FIRST of a single grammar symbol: a terminal's FIRST set is the symbol itself
        if symbol.islower():
            return {symbol}
        return self.first_set[symbol]

    def is_ll1(self):
        for non_terminal, productions in self.grammar.items():
            for i in range(len(productions)):
                for j in range(i + 1, len(productions)):
                    first_set_i = self.first_of(productions[i][0])
                    first_set_j = self.first_of(productions[j][0])
                    if first_set_i & first_set_j:  # FIRST/FIRST conflict
                        return False
                    if '' in first_set_i and self.follow_set[non_terminal] & first_set_j:
                        return False  # FIRST/FOLLOW conflict
        return True

# Example usage:

grammar = {
    'S': ['AB', 'AC'],
    'A': ['a', 'b'],
    'B': ['c', 'd'],
    'C': ['e', 'f']
}

checker = LL1Checker(grammar)

if checker.is_ll1():
    print("The grammar is LL(1)")
else:
    print("The grammar is not LL(1)")

OUTPUT:


PRACTICAL-8

Aim: Write a Python program to implement a recursive descent parser for a given grammar.
print("Recursive Desent Parsing For following
grammar\n") print("E->iE'\nE'->+iE'/@\n")
print("Enter the string want to be checked\n")
global s
s=list(input()
) global i
i=0
def match(a):
global s
global i
if(i>=len(s))
:
return False
elif(s[i]==a):
i+=1
return True
else:
return False
defF():
if(match("(")):
if(E()):
if(match(")"))
: return True
else:
return False
else:
return False

23
COMPILER DESIGN 211040107005

elif(match("i")):
return True
else:
return False
defTx():
if(match("*")):
if(F()):
if(Tx()):
return True
else:
return False
else:
return False
else:
return True
defT():
if(F()):
if(Tx()):
return True
else:
return False
else:
return False
defEx():
if(match("+")):
if(T()):
if(Ex()):
return True
else:

24
COMPILER DESIGN 211040107005

return False
else:
return False
else:
return True
defE():
if(T()):
if(Ex()):
return True
else:
return False
else:
return False
if(E()):
if(i==len(s)):
print("String is accepted")
else:
print("String is not accepted")

else:
print("string is not accepted")


OUTPUT:


PRACTICAL-9

Aim: Write a Python program to implement an operator precedence parser.


MAX = 10

EXPRESSION_MAX = 100

def precedence(op):
    if op == '+':
        return 1   # lowest precedence
    elif op == '*':
        return 2   # higher precedence
    elif op == '(':
        return 0   # no precedence
    elif op == ')':
        return 0   # no precedence
    else:
        return -1  # invalid operator

def create_table(operators):
    table = [[0 for _ in range(len(operators))] for _ in range(len(operators))]
    for i in range(len(operators)):
        for j in range(len(operators)):
            if i == j:
                table[i][j] = 0   # equal precedence
            elif precedence(operators[i]) > precedence(operators[j]):
                table[i][j] = 1   # first operator has higher precedence
            elif precedence(operators[i]) < precedence(operators[j]):
                table[i][j] = -1  # second operator has higher precedence
            else:
                if operators[i] in '+*':
                    table[i][j] = 1   # left associative
                else:
                    table[i][j] = 0   # invalid case
    return table

def display_table(operators, table):
    print("Operator Precedence Table:")
    print("    ", end='')
    for op in operators:
        print(f" {op} ", end='')
    print()
    for i in range(len(operators)):
        print(f" {operators[i]} |", end='')
        for j in range(len(operators)):
            if table[i][j] == 1:
                print(" > ", end='')
            elif table[i][j] == -1:
                print(" < ", end='')
            else:
                print(" = ", end='')
        print()

def parse_expression(expression, operators):
    print(f"\nParsing expression: {expression}")
    k = 0
    while k < len(expression):
        char = expression[k]
        if char in operators:
            print(f"Operator: {char}, Precedence: {precedence(char)}")
        elif char == 'i' and k + 1 < len(expression) and expression[k + 1] == 'd':
            print("Identifier: id")
            k += 1   # skip the 'd' that is part of 'id'
        k += 1

def main():
    operators = "+*()"
    table = create_table(operators)
    display_table(operators, table)
    expression = input("\nEnter an expression: ").strip()
    parse_expression(expression, operators)

if __name__ == "__main__":
    main()


OUTPUT:


PRACTICAL-10

Aim: Write a Python program to implement an LALR(1) parser for the given grammar.

class ActionType:
    SHIFT = 0
    REDUCE = 1
    ACCEPT = 2
    ERROR = 3

class Action:
    def __init__(self, action_type, value):
        self.type = action_type
        self.value = value

MAX_STATES = 4

MAX_SYMBOLS = 3

parsing_table = [[None for _ in range(MAX_SYMBOLS)] for _ in range(MAX_STATES)]

def initialize_parsing_table():
    # State 0
    parsing_table[0][0] = Action(ActionType.SHIFT, 1)   # on 'a'
    parsing_table[0][1] = Action(ActionType.SHIFT, 2)   # on 'b'
    # State 1
    parsing_table[1][2] = Action(ActionType.ACCEPT, 0)  # on '$'
    # State 2
    parsing_table[2][0] = Action(ActionType.SHIFT, 1)   # on 'a'
    parsing_table[2][1] = Action(ActionType.REDUCE, 1)  # A -> b
    # State 3
    parsing_table[3][0] = Action(ActionType.REDUCE, 0)  # A -> aA
    parsing_table[3][1] = Action(ActionType.REDUCE, 0)  # A -> aA

def print_parsing_table():
    print("Parsing Table:")
    print("State\t a\t\t b\t\t $")
    print("_" * 30)
    for i in range(MAX_STATES):
        print(f"{i}\t\t", end="")
        for j in range(MAX_SYMBOLS):
            action = parsing_table[i][j]
            if action is not None:
                if action.type == ActionType.SHIFT:
                    print(f"S{action.value}\t\t", end="")
                elif action.type == ActionType.REDUCE:
                    print(f"R{action.value}\t\t", end="")
                elif action.type == ActionType.ACCEPT:
                    print("Accept\t\t", end="")
                else:
                    print("-\t\t", end="")
            else:
                print("-\t\t", end="")
        print()

if __name__ == "__main__":
    initialize_parsing_table()
    print_parsing_table()

OUTPUT:

