Practical File Compiler Design

INDEX

S.No.  Name of Experiment
1.  Tokenizing a file using C
2.  Implementation of Lexical Analyzer using Lex Tool
3.  Study the LEX and YACC tools and evaluate an arithmetic expression with parentheses, unary and binary operators using Flex and YACC (Calculator)
4.  Using JFLAP, create a DFA from a given regular expression
5.  Create LL(1) parse table for a given CFG and hence simulate LL(1) parsing
6.  Using JFLAP, create SLR(1) parse table for a given grammar; simulate parsing and output the parse tree in proper format
7.  Write functions to find FIRST and FOLLOW of all the variables
Experiment – 1
Tokenizing a file using C
Aim: (Tokenizing) Write a program that reads C/C++ source code from an unformatted file and extracts the various types of tokens from it (e.g. keywords, variable names, operators, constant values).

Description: Lexical analysis is the process of converting a sequence of characters (such as in a computer program or web page) into a sequence of tokens (strings with an identified "meaning"). A program that performs lexical analysis may be called a lexer, tokenizer, or scanner.

Token
A token is a structure representing a lexeme that explicitly indicates its categorization for the purpose of parsing. A category of token is what in linguistics might be called a part of speech. Examples of token categories include "identifier" and "integer literal", although the set of token categories differs between programming languages. The process of forming tokens from an input stream of characters is called tokenization. Consider this expression in the C programming language:

sum = 3 + 2;

It is tokenized and represented by the following table:

Lexeme   Token category
sum      identifier
=        assignment operator
3        integer literal
+        addition operator
2        integer literal
;        end of statement

Algorithm:-

1. Start the program.
2. Include the header files.
3. Allocate memory for the variables by dynamic memory allocation functions.
4. Use the file-accessing functions to read the file.
5. Get the input file name from the user.
6. Separate all the file contents into tokens and match them against the word lists.
7. Define all the keywords in a separate file and name it key.c.
8. Define all the operators in a separate file and name it oper.c.
9. Give the input program in a file and name it input.c.
10. Finally, print the output after recognizing all the tokens.
11. Stop the program.

Program:

#include<stdio.h>
#include<conio.h>
#include<ctype.h>
#include<string.h>

void main()
{
    FILE *fi,*fo,*fop,*fk;
    int flag=0,i=1;
    char c,a[15],ch[15],file[20];
    clrscr();
    printf("\n Enter the File Name:");
    scanf("%s",file);
    fi=fopen(file,"r");
    fo=fopen("inter.c","w");
    fop=fopen("oper.c","r");
    fk=fopen("key.c","r");
    /* Pass 1: copy the source into inter.c, surrounding every
       punctuation character with tabs and marking line ends with $ */
    c=getc(fi);
    while(!feof(fi))
    {
        if(isalpha(c)||isdigit(c)||c=='['||c==']'||c=='.')
            fputc(c,fo);
        else
        {
            if(c=='\n')
                fprintf(fo,"\t$\t");
            else
                fprintf(fo,"\t%c\t",c);
        }
        c=getc(fi);
    }
    fclose(fi);
    fclose(fo);
    /* Pass 2: read inter.c token by token and classify each token */
    fi=fopen("inter.c","r");
    printf("\n Lexical Analysis");
    fscanf(fi,"%s",a);
    printf("\n Line: %d\n",i++);
    while(!feof(fi))
    {
        if(strcmp(a,"$")==0)
        {
            printf("\n Line: %d \n",i++);
            fscanf(fi,"%s",a);
        }
        /* look the token up in the operator table (oper.c holds
           symbol/name pairs) */
        fscanf(fop,"%s",ch);
        while(!feof(fop))
        {
            if(strcmp(ch,a)==0)
            {
                fscanf(fop,"%s",ch);   /* read the operator's name */
                printf("\t\t%s\t:\t%s\n",a,ch);
                flag=1;
            }
            fscanf(fop,"%s",ch);
        }
        rewind(fop);
        /* look the token up in the keyword table (key.c) */
        fscanf(fk,"%s",ch);
        while(!feof(fk))
        {
            if(strcmp(ch,a)==0)
            {
                printf("\t\t%s\t:\tKeyword\n",a);
                flag=1;
            }
            fscanf(fk,"%s",ch);
        }
        rewind(fk);
        /* anything not matched above is a constant or an identifier */
        if(flag==0)
        {
            if(isdigit(a[0]))
                printf("\t\t%s\t:\tConstant\n",a);
            else
                printf("\t\t%s\t:\tIdentifier\n",a);
        }
        flag=0;
        fscanf(fi,"%s",a);
    }
    getch();
}
Input Files:

Oper.c
(  openpara
)  closepara
{  openbrace
}  closebrace
<  lesser
>  greater
"  doublequote
'  singlequote
:  colon
;  semicolon
#  preprocessor
=  assign
== equal
%  percentage
^  bitwise
&  reference
*  star
+  add
-  sub
\  backslash
/  slash

Key.c
int
void
main
char
if
for
while
else
printf
scanf
FILE
include
stdio.h
conio.h
iostream.h

Input.c
#include "stdio.h"
#include "conio.h"
void main()
{
    int a=10,b,c;
    a=b*c;
    getch();
}
Output:
Experiment 2
Implementation of Lexical Analyzer using Lex Tool

Aim: (Tokenizing) Use Lex and yacc to extract tokens from a given source code.

Description:

➢ Lex is a language for specifying lexical analyzers.
➢ There is a wide range of tools for the construction of lexical analyzers; the majority of these tools are based on regular expressions.
➢ One of the traditional tools of this kind is Lex.

Lex:-

➢ Lex is used in the following manner. A specification of the lexical analyzer is prepared by creating a program lex.l in the Lex language.
➢ Then lex.l is run through the Lex compiler to produce a C program lex.yy.c.
➢ The program lex.yy.c consists of a tabular representation of a transition diagram constructed from the regular expressions of lex.l, together with a standard routine that uses the table to recognize lexemes.
➢ lex.yy.c is run through the C compiler to produce an object program a.out, which is the lexical analyzer that transforms an input stream into a sequence of tokens.
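
Putting these steps together, a typical build-and-run cycle from the shell looks like this (the same commands appear again in the output of this experiment):

$ lex lexp.l        # lexp.l -> lex.yy.c
$ cc lex.yy.c       # lex.yy.c -> a.out
$ ./a.out input.c   # run the generated lexical analyzer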

Algorithm:
1. First, a specification of the lexical analyzer is prepared by creating a program lexp.l in the LEX language.
2. The lexp.l program is run through the LEX compiler to produce equivalent code in the C language, named lex.yy.c.
3. The program lex.yy.c consists of a table constructed from the regular expressions of lexp.l, together with standard routines that use the table to recognize lexemes.
4. Finally, the lex.yy.c program is run through the C compiler to produce an object program a.out, which is the lexical analyzer that transforms an input stream into a sequence of tokens.
Program:

lexp.l

%{
#include<stdlib.h>
int COMMENT=0;
%}
identifier [a-zA-Z][a-zA-Z0-9]*
%%
#.*        {printf("\n %s is a Preprocessor Directive",yytext);}
int |
float |
main |
if |
else |
printf |
scanf |
for |
char |
getch |
while      {printf("\n %s is a Keyword",yytext);}
"/*"       {COMMENT=1;}
"*/"       {COMMENT=0;}
{identifier}\(  {if(!COMMENT) printf("\n Function:\t %s",yytext);}
\{         {if(!COMMENT) printf("\n Block Begins");}
\}         {if(!COMMENT) printf("\n Block Ends");}
{identifier}(\[[0-9]*\])?  {if(!COMMENT) printf("\n %s is an Identifier",yytext);}
\".*\"     {if(!COMMENT) printf("\n %s is a String",yytext);}
[0-9]+     {if(!COMMENT) printf("\n %s is a Number",yytext);}
\)(\;)?    {if(!COMMENT){printf("\t");ECHO;printf("\n");}}
\(         ECHO;
=          {if(!COMMENT) printf("\n %s is an Assignment Operator",yytext);}
\<= |
\>= |
\< |
\> |
==         {if(!COMMENT) printf("\n %s is a Relational Operator",yytext);}
.|\n       ;
%%
int main(int argc, char **argv)
{
    if(argc>1)
    {
        FILE *file;
        file=fopen(argv[1],"r");
        if(!file)
        {
            printf("\n Could not open the file: %s",argv[1]);
            exit(0);
        }
        yyin=file;
    }
    yylex();
    printf("\n\n");
    return 0;
}
int yywrap()
{
    return 1;
}
Output:
test.c
#include<stdio.h>
main()
{
    int fact=1,n;
    for(int i=1;i<=n;i++)
    {
        fact=fact*i;
    }
    printf("Factorial Value of N is", fact);
    getch();
}

$ lex lexp.l
$ cc lex.yy.c
$ ./a.out test.c

#include<stdio.h> is a Preprocessor Directive
Function: main(
)
Block Begins
int is a Keyword
fact is an Identifier
= is an Assignment Operator
1 is a Number
n is an Identifier
Function: for(
int is a Keyword
i is an Identifier
= is an Assignment Operator
1 is a Number
i is an Identifier
<= is a Relational Operator
n is an Identifier
i is an Identifier
);
Block Begins
fact is an Identifier
= is an Assignment Operator
fact is an Identifier
i is an Identifier
Block Ends
Function: printf(
"Factorial Value of N is" is a String
fact is an Identifier
);
Function: getch(
);
Block Ends

Experiment-3

Study of LEX and YACC Tools

Aim: Study the LEX and YACC tools and evaluate an arithmetic expression with parentheses, unary and binary operators using Flex and Yacc. [The yylex() function needs to be written and used with Lex and yacc.]

Description:

LEX - A Lexical analyzer generator:

Lex is a computer program that generates lexical analyzers ("scanners" or "lexers"). Lex is commonly used with the yacc parser generator.

Lex reads an input stream specifying the lexical analyzer and outputs source code implementing the lexer in the C programming language.

1. A lexer or scanner is used to perform lexical analysis, or the breaking up of an input stream into meaningful units, or tokens.
2. For example, consider breaking a text file up into individual words.
3. Lex is a tool for automatically generating a lexer or scanner given a lex specification (.l file).

Structure of a Lex file

The structure of a Lex file is intentionally similar to that of a yacc file; files are divided up into three sections, separated by lines that contain only two percent signs, as follows:

Definition section
%%
Rules section
%%
C code section

➢ The definition section is the place to define macros and to import header files
written in C. It is also possible to write any C code here, which will be copied
verbatim into the generated source file.

➢ The rules section is the most important section; it associates patterns with C
statements. Patterns are simply regular expressions. When the lexer sees some
text in the input matching a given pattern, it executes the associated C code. This
is the basis of how Lex operates.
➢ The C code section contains C statements and functions that are copied verbatim
to the generated source file. These statements presumably contain code called by
the rules in the rules section. In large programs it is more convenient to place this
code in a separate file and link it in at compile time.
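
As a minimal illustration (our own, not part of the original manual), a complete Lex file with all three sections might look like the following word counter; the macro name WORD and the variable words are illustrative:

%{
/* definition section: C declarations copied verbatim into lex.yy.c */
#include <stdio.h>
int words = 0;
%}
WORD [a-zA-Z]+
%%
{WORD}  { words++; }   /* rules section: pattern + C action */
.|\n    ;              /* discard everything else           */
%%
/* C code section: copied verbatim to the end of lex.yy.c */
int main(void)
{
    yylex();
    printf("%d words\n", words);
    return 0;
}
int yywrap(void) { return 1; }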

Description:-
The lex command reads File or standard input, generates a C language program, and writes it to a
file named lex.yy.c. This file, lex.yy.c, is a compilable C language program. A C++ compiler
also can compile the output of the lex command. The -C flag renames the output file to lex.yy.C
for the C++ compiler. The
C++ program generated by the lex command can use either STDIO or
IOSTREAMS. If the cpp define _CPP_IOSTREAMS is true during a C++ compilation, the
program uses IOSTREAMS for all I/O. Otherwise, STDIO is used.

The lex command uses rules and actions contained in File to generate a program, lex.yy.c, which can be compiled with the cc command. The compiled lex.yy.c can then receive input, break the input into the logical pieces defined by the rules in File, and run program fragments contained in the actions in File.

The generated program is a C language function called yylex. The lex command stores the yylex
function
in a file named lex.yy.c. You can use the yylex function alone to recognize simple one-word
input, or you can use it with other C language programs to perform more difficult input analysis
functions. For example, you can use the lex command to generate a program that simplifies an
input stream before sending it to a parser program generated by the yacc command.
The yylex function analyzes the input stream using a program structure called a finite state
machine. This structure allows the program to exist in only one state (or condition) at a time.
There is a finite number of states allowed. The rules in File determine how the program moves
from one state to another. If you do not specify a File, the lex command reads standard input. It
treats multiple files as a single file.
Note: Since the lex command uses fixed names for intermediate and output files, you can have
only one program generated by lex in a given directory.

Regular Expression Basics

.     : matches any single character except \n
*     : matches 0 or more instances of the preceding regular expression
+     : matches 1 or more instances of the preceding regular expression
?     : matches 0 or 1 of the preceding regular expression
|     : matches the preceding or following regular expression
[ ]   : defines a character class
( )   : groups the enclosed regular expression into a new regular expression
"..." : matches everything within the quotes literally
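
For instance, combining these operators gives patterns such as the following (illustrative examples, not from the manual):

[0-9]+                  an integer constant
[0-9]+(\.[0-9]+)?       an integer or a decimal number
"if"|"while"            the literal keyword if or while
[a-zA-Z_][a-zA-Z0-9_]*  a typical C identifier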

Special Functions
• yytext
– where text matched most recently is stored
• yyleng
– number of characters in text most recently matched
• yylval
– associated value of current token
• yymore()
– append next string matched to current contents of yytext
• yyless(n)
– remove from yytext all but the first n characters
• unput(c)
– return character c to input stream
• yywrap()
– may be replaced by user
– The yywrap method is called by the lexical analyser whenever it inputs an EOF as the first
character when trying to match a regular expression
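
As a small sketch of the first two of these (our own example), a rule like the following would report every word matched along with its length:

[a-zA-Z]+  { printf("matched '%s' (%d characters)\n", yytext, yyleng); }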

Files

y.output  -- contains a readable description of the parsing tables and a report on conflicts generated by grammar ambiguities.
y.tab.c   -- contains the output file.
y.tab.h   -- contains definitions for token names.
yacc.tmp, yacc.debug, yacc.acts -- temporary files.
/usr/ccs/lib/yaccpar -- contains the parser prototype for C programs.
/usr/ccs/lib/liby.a  -- contains a run-time library.

YACC: Yet Another Compiler-Compiler

Yacc is written in portable C. The class of specifications accepted is a very general one: LALR(1) grammars with disambiguating rules.

Basic specification
Names refer to either tokens or non-terminal symbols. Yacc requires token names to be declared as such. In addition, it is often desirable to include the lexical analyzer as part of the specification file; it may be useful to include other programs as well. Thus, the sections are separated by double percent "%%" marks. (The percent sign '%' is generally used in yacc specifications as an escape character.) A full specification file looks like:

Declarations
%%
Rules
%%
Programs

The declaration section may be empty. Moreover, if the programs section is omitted, the second %% mark may be omitted also; thus the smallest legal yacc specification is

%%
Rules

Blanks, tabs and newlines are ignored, except that they may not appear in names or multi-character reserved symbols. Comments may appear wherever legal; they are enclosed in /*...*/ as in C and PL/I.
The rules section is made up of one or more grammar rules. A rule has the form:

A : BODY ;
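
For example, a rule for left-associative addition might be written as follows (EXPR and TERM are illustrative names, not from any fixed grammar):

EXPR : EXPR '+' TERM   { $$ = $1 + $3; }
     | TERM
     ;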
USING THE LEX PROGRAM WITH THE YACC PROGRAM

The Lex program recognizes only extended regular expressions and formats them into character packages called tokens, as specified by the input file. When using the Lex program to make a lexical analyzer for a parser, the lexical analyzer (created from the Lex command) partitions the input stream. The parser (from the yacc command) assigns structure to the resulting pieces. You can also use other programs along with programs generated by Lex or yacc commands.

A token is the smallest independent unit of meaning as defined by either the parser or the lexical analyzer. A token can contain data, a language keyword, an identifier, or parts of language syntax.

The yacc program looks for a lexical analyzer subroutine named yylex, which is generated by the lex command. Normally the default main program in the Lex library calls the yylex subroutine. However, if the yacc command is loaded and its main program is used, yacc calls the yylex subroutine. In this case each Lex rule should end with:

return (token);

where the appropriate token value is returned.
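
For instance, a Lex rule of this shape would hand an INTEGER token (declared with %token in the yacc file) and its value to the parser; the token name here is illustrative:

[0-9]+  { yylval = atoi(yytext); return INTEGER; }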


The yacc command assigns an integer value to each token defined in the yacc grammar file
through a # define preprocessor statement.
The lexical analyzer must have access to these macros to return the tokens to the parser. Use the
yacc – d option to create a y.tab.h file and include the y.tab.h file in the Lex specification file by
adding the following lines to the definition section of the Lex specification file:
%{
#include “y.tab.h”
%}
Alternatively you can include the lex.yy.c file the yacc output file by adding the following lines
after the second %% (percent sign, percent sign) delimiter in the yacc grammar file:
#include”lex.yy.c”
The yacc library should be loaded before the Lex library to get a main program that invokes the
yacc parser. You can generate Lex and yacc programs in either order.

Evaluating Arithmetic Expression using different Operators (Calculator)

Algorithm:
1) Get the input from the user and parse it token by token.

2) First identify the valid inputs that can be given to the program.

3) The inputs include numbers, functions like LOG, COS, SIN, TAN, etc., and operators.

4) Define the precedence and the associativity of the various operators like +, -, /, *, etc.

5) Write code for saving the answer into memory and displaying the result on the screen.

6) Write code for performing the various arithmetic operations.

7) Display the possible error messages that can be associated with the calculation.

8) Display the output on the screen, else display the error message on the screen.

Program:

CALC.L

%{
#include <stdio.h>
#include <stdlib.h>
#include "y.tab.h"
void yyerror(char *);
%}
%%
[a-z]       { yylval = *yytext - 'a'; return VARIABLE; }
[0-9]+      { yylval = atoi(yytext); return INTEGER; }
[-+()=*/\n] { return *yytext; }
[ \t]       ; /* skip whitespace */
.           { yyerror("invalid character"); }
%%
int yywrap(void)
{
    return 1;
}

CALC.Y

%token INTEGER VARIABLE
%left '+' '-'
%left '*' '/'
%{
int yylex(void);
void yyerror(char *);
int sym[26];    /* memory for the variables a..z */
%}
%%
PROG:
      PROG STMT '\n'
    |
    ;
STMT: EXPR                { printf("\n %d",$1); }
    | VARIABLE '=' EXPR   { sym[$1] = $3; }
    ;
EXPR: INTEGER
    | VARIABLE            { $$ = sym[$1]; }
    | EXPR '+' EXPR       { $$ = $1 + $3; }
    | EXPR '-' EXPR       { $$ = $1 - $3; }
    | EXPR '*' EXPR       { $$ = $1 * $3; }
    | EXPR '/' EXPR       { $$ = $1 / $3; }
    | '(' EXPR ')'        { $$ = $2; }
    ;
%%
void yyerror(char *s)
{
    printf("\n %s",s);
}
int main(void)
{
    printf("\n Enter the Expression:");
    yyparse();
    return 0;
}
Output:
$ lex calc.l
$ yacc -d calc.y
$ cc y.tab.c lex.yy.c -ll -ly -lm
$ ./a.out
Enter the Expression: (5+4)*3
Answer: 27

Experiment-4

Using JFLAP, create a DFA from a given regular expression

Aim: Using JFLAP, create a DFA from a given regular expression. All types of errors must be checked during the conversion.

What is JFLAP: -

The JFLAP program makes it possible to create and simulate automata. Learning about automata with pen and paper can be difficult, time consuming and error-prone. With JFLAP we can create automata of different types, and it is easy to change them as we want. JFLAP supports creation of DFA and NFA, Regular Expressions, PDA, Turing Machines, Grammars and more.

Setup: -

JFLAP is available from the homepage (www.JFLAP.org). From there press "Get JFLAP" and follow the instructions. You will notice that JFLAP has a .JAR extension. This means that you need Java to run JFLAP. With Java correctly installed you can simply select the program to run it. You can also use a command console and run it from the file's current directory with: java -jar JFLAP.jar.

Using JFLAP: -

When you first start JFLAP you will see a small menu with a selection of eleven different automata and rule sets. Choosing one of them will open the editor where you create the chosen type of automaton. Usually you create automata containing states and transitions, but there is also creation of Grammars and Regular Expressions, which is done with a text editor.

DFA from a given regular expression: -

First, we need to select Regular Expression from the JFLAP menu.

Now you should have an empty window in front of you. You will have a couple of tools and
features at your disposal.

The toolbar contains six tools, which are used to edit automata.

Attribute Editor Tool: changes properties and position of existing states and transitions.
State Creator Tool: creates new states.
Transition Creator Tool: creates transitions.
Deletion Tool: deletes states and transitions.
Undo/Redo: reverts or reapplies changes to the selected object according to its history.
Regular Expressions can be typed into JFLAP to be converted to an NFA.

Choose Regular Expression in the main menu, then just type the expression in the textbox.
Definitions for Regular Expressions in JFLAP:

➢ * Kleene Star
➢ + Union
➢ ! Empty String

Correctly written expressions can then be converted to an NFA. To convert your expression select Convert → Convert to NFA. The conversion will begin with two states and a transition with your Regular Expression. With the (D)e-expressionify Transition tool you can break down the Regular Expression into smaller parts. Each transition will contain a sub-expression. The next step is to link every rule with lambda transitions. Add new transitions between states that should be connected with the Transition Tool. If you are unsure what to do you can select Do Step to automatically make the next step. If you want the NFA immediately, Do All creates the whole NFA for you.

You can notice how the conversion differs depending on how the Regular Expression looks. For example, the expression a+b results in a fork, where either 'a' or 'b' can be chosen.

Now finally convert the NFA to a DFA:

Experiment-5

Create LL(1) parse table for a given CFG and hence Simulate LL(1) Parsing

Aim: Using JFLAP, create an LL(1) parse table for a given CFG and hence simulate LL(1) parsing.

Implementation: -

Step 1: - Choose Grammar from the JFLAP menu and enter the grammar for which you want to create the LL(1) parsing table.

Step 2: - Select Input from the menu and select Build LL(1) parse table from it.

Step 3: - Now select parse to use that table to create a parse tree from it.

Result: - We created an LL(1) parse table for a given CFG and hence simulated LL(1) parsing.
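
As a sketch of what JFLAP is doing under the hood — our own minimal C illustration, not part of the JFLAP exercise — the following simulates table-driven LL(1) parsing for the toy grammar S -> aSb | Є, whose LL(1) table is M[S][a] = aSb and M[S][b] = M[S][$] = Є:

#include <stdio.h>

int parse(const char *input)
{
    char stack[100], w[100];
    int top = 0, ip = 0;
    sprintf(w, "%s$", input);     /* append the end marker */
    stack[top++] = '$';
    stack[top++] = 'S';           /* push the start symbol */
    while (top > 0) {
        char X = stack[--top], a = w[ip];
        if (X == a) {             /* terminal (or $): match and advance */
            if (X == '$') return 1;
            ip++;
        } else if (X == 'S') {    /* consult the table row for S */
            if (a == 'a') {       /* M[S][a]: S -> aSb, push reversed */
                stack[top++] = 'b';
                stack[top++] = 'S';
                stack[top++] = 'a';
            }
            /* M[S][b] and M[S][$]: S -> epsilon, push nothing */
            else if (a != 'b' && a != '$') return 0;
        } else return 0;          /* empty table cell: error */
    }
    return 0;
}

int main(void)
{
    printf("aabb : %s\n", parse("aabb") ? "accepted" : "rejected");
    printf("aab  : %s\n", parse("aab")  ? "accepted" : "rejected");
    return 0;
}

Running it accepts aabb and rejects aab, mirroring the stack-based simulation that JFLAP animates step by step.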

Experiment-6

Create SLR(1) parse table for a given grammar


Aim: Using JFLAP, create an SLR(1) parse table for a given grammar. Simulate parsing and output the parse tree in proper format.

Implementation: -

Step 1: - Choose Grammar from the JFLAP menu and enter the grammar for which you want to create the SLR(1) parsing table.

Step 2: - Select Input from the menu and select Build SLR(1) parse table from it.

Step 3: - Now select parse to use that table to create a parse tree from it.

Result: - We created an SLR(1) parse table for a given grammar, simulated parsing, and output the parse tree in proper format.

Experiment-7

FIRST and FOLLOW of all the variables


Aim: Write a suitable data structure to store a Context Free Grammar. Prerequisite is to eliminate
left recursion from the grammar before storing. Write functions to find FIRST and FOLLOW of
all the variables. [May use unformatted file / array to store the result].

Algorithm:

FIRST():

If x is a terminal, then FIRST(x) = { x }.
If x -> Є is a production rule, then add Є to FIRST(x).
If X -> Y1 Y2 Y3 ... Yn is a production, then:
    FIRST(X) = FIRST(Y1)
    If FIRST(Y1) contains Є, then FIRST(X) = { FIRST(Y1) - Є } U { FIRST(Y2) }
    If FIRST(Yi) contains Є for all i = 1 to n, then add Є to FIRST(X).

FOLLOW():

FOLLOW(S) = { $ }, where S is the starting non-terminal.
If A -> pBq is a production, where p, B and q are any grammar symbols, then everything in FIRST(q) except Є is in FOLLOW(B).
If A -> pB is a production, then everything in FOLLOW(A) is in FOLLOW(B).
If A -> pBq is a production and FIRST(q) contains Є, then FOLLOW(B) contains { FIRST(q) - Є } U FOLLOW(A).
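
As a quick check of these rules (an illustrative grammar of our own, not part of the assignment), consider S -> AB, A -> a | Є, B -> b. Then FIRST(A) = { a, Є } and FIRST(B) = { b }; since FIRST(A) contains Є, FIRST(S) = { a, b }. For the follow sets, FOLLOW(S) = { $ }; A is followed by B, so FOLLOW(A) = FIRST(B) = { b }; and B ends a production of S, so FOLLOW(B) = FOLLOW(S) = { $ }.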

Program:

#include<stdio.h>
#include<math.h>
#include<string.h>
#include<ctype.h>
#include<stdlib.h>
#include<conio.h>

int n,m=0,p,i=0,j=0;
char a[10][10],f[10];
void follow(char c);
void first(char c);

int main()
{
    int i,z;
    char c,ch;
    clrscr();
    /* productions are entered one per line in the form X=YZ,
       with $ standing for epsilon, e.g. A=$ */
    printf("Enter the no of productions:\n");
    scanf("%d",&n);
    printf("Enter the productions:\n");
    for(i=0;i<n;i++)
        scanf("%s%c",a[i],&ch);
    do{
        m=0;
        printf("Enter the element whose FIRST & FOLLOW is to be found: ");
        scanf("%c",&c);
        first(c);
        printf("First(%c)={",c);
        for(i=0;i<m;i++)
            printf("%c",f[i]);
        printf("}\n");
        strcpy(f," ");
        m=0;
        follow(c);
        printf("Follow(%c)={",c);
        for(i=0;i<m;i++)
            printf("%c",f[i]);
        printf("}\n");
        printf("Continue(0/1)? ");
        scanf("%d%c",&z,&ch);
    }while(z==1);
    return 0;
}

void first(char c)
{
    int k;
    if(!isupper(c))              /* a terminal is its own FIRST */
        f[m++]=c;
    for(k=0;k<n;k++)
    {
        if(a[k][0]==c)           /* production with c on the left */
        {
            if(a[k][2]=='$')     /* epsilon production: X=$ */
                follow(a[k][0]);
            else if(islower(a[k][2]))
                f[m++]=a[k][2];
            else
                first(a[k][2]);
        }
    }
}

void follow(char c)
{
    if(a[0][0]==c)               /* start symbol: $ is in FOLLOW */
        f[m++]='$';
    for(i=0;i<n;i++)
    {
        for(j=2;j<strlen(a[i]);j++)
        {
            if(a[i][j]==c)
            {
                if(a[i][j+1]!='\0')                 /* symbol after c */
                    first(a[i][j+1]);
                if(a[i][j+1]=='\0' && c!=a[i][0])   /* c at end of RHS */
                    follow(a[i][0]);
            }
        }
    }
}
Output:
