Compilers

Introduction:
> A compiler is a translator that takes as input a source program written in a High Level Language and produces as output an equivalent program in assembly language or machine language.

Structure of Compiler:
> The process of compilation is very complex and hence it is partitioned into a series of sub-processes called phases. (Refer Class Notes)

The C Compiler:
> C is a general purpose programming language designed by D. M. Ritchie and is used as the primary programming language on the UNIX O.S. UNIX itself is written in C and has been moved to a number of machines, ranging from micro-processors to large main-frames, by first moving a C compiler.
> The PDP-11 compiler by Ritchie works in two passes:
  o Pass I does lexical analysis, syntax analysis and intermediate code generation. It uses a recursive descent parser for everything except expressions, for which an operator precedence parser is used. The intermediate code consists of postfix notation for expressions and assembly code for control flow statements. Storage allocation for local names is done during this pass.
  o Pass II does code generation.
  o An optional third pass does optimization on the assembly language output.

Interpreter:
> An interpreter is a kind of translator which produces the result directly when the source language and data are given to it as input.
> The process of interpretation can be carried out in the following phases:
  o Lexical Analysis
  o Syntax Analysis
  o Semantic Analysis
  o Direct Execution
> Examples of interpreted languages are: BASIC, SNOBOL, LISP.
> Advantages:
  o Modification of the user program can easily be made and implemented as execution proceeds.
  o The type of object that a variable denotes may change dynamically.
  o Debugging a program and finding errors is a simplified task when a program is interpreted.
> Disadvantages:
  o The execution of the program is slower.
  o Memory consumption is more.

Module 7: Lexical Analysis

Introduction:
> The lexical analyzer (scanner) is the first phase of a compiler. Its main task is to read the input characters and produce as output a sequence of tokens that the parser uses for syntax analysis. The interaction is: Source Program -> Lexical Analyzer -> token -> Syntax Analyzer.
> Although the task of the scanner is to convert the entire source program into a sequence of tokens, conceptually the scanner will rarely do this all at once. The lexical analyzer is a subroutine of the parser, i.e. it operates under the control of the parser. Upon receiving a "get next token" command from the parser, the lexical analyzer reads input characters until it can identify the next token.
> Other functions performed by the Lexical Analyzer are:
  o Removal of comments
  o Case conversion
  o Removal of white spaces
  o Communication with the symbol table, i.e. storing information regarding an identifier in the symbol table.
> Regular Expressions are used to specify the tokens. The advantage of using a Regular Expression is that a recognizer for the tokens, called a Finite Automaton, can easily be constructed.
> Two classes of Finite Automaton are:
  1) Non-Deterministic (NFA)
  2) Deterministic (DFA)
> Conversion path (refer TCS notes for this topic):
  R.E. -> NFA with null transitions -> NFA without null transitions -> DFA -> Minimized DFA

Role of Finite State Automata (FA) in Compiler Design:
> A compiler is a translator that takes as input a source program written in a High Level Language.
> The first phase of the compilation process is Lexical Analysis.
> Lexical Analysis is a process of recognizing tokens from the source program.
> The block diagram for this process is: Input Buffer -> Lexical Analyzer -> Tokens.
> While constructing the lexical analyzer we first design the regular expression for the token.
> A method to convert a regular expression into its recognizer is to construct a generalized transition diagram from that expression; this recognizer is called a Finite Automaton.
> A Finite Automaton (FA) is a recognizer for a language L that takes as input a string x and answers "yes" if x is a sentence of L and "no" otherwise.
> Two classes of Finite Automaton are:
  o Non-Deterministic (NFA)
  o Deterministic (DFA)

Example to illustrate the above concept:
Regular expression for an identifier: r = letter (letter | digit)* delimiter

Transition diagram for an identifier:
  state 0 --letter--> state 1;  state 1 --letter or digit--> state 1;  state 1 --delimiter--> state 2

The starting state of the transition diagram is state 0, the edge from which indicates that the first input character must be a letter. If this is the case, we enter state 1 and look at the next input character. If that is a letter or digit, we re-enter state 1 and look at the input character after that. We continue this way, reading letters and digits and making transitions from state 1 to itself, until the next input character is a delimiter for an identifier. On reading the delimiter we enter state 2. State 2 indicates an identifier has been found, and hence a token is generated and returned to the parser.

Related University Questions:
May 2010: Explain the role of the Finite Automata in Compiler Theory. (5) (Refer Printed Notes)
Dec 2010: Explain the role of Finite State Automata in Compiler Design. (5) (Refer Printed Notes)
Explain the role of the Lexical Analyzer. (5) (Refer Printed Notes)

Module 8: Syntax Analysis

Introduction:
The Syntax Analyzer analyzes the syntactic structure of the program and its components and checks for errors.
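Before moving on, the identifier transition diagram from the lexical-analysis example above can be sketched as a small scanning routine. This is an illustrative sketch, not code from the notes: the function name is invented, and using Python's `isalpha`/`isalnum` as the letter/digit classes is an assumption.

```python
# States mirror the transition diagram: state 0 expects a letter,
# state 1 loops on letters/digits, and any delimiter character
# (including end of input) moves us to the accepting state 2.
def scan_identifier(text, pos=0):
    """Return (lexeme, next_pos) if an identifier starts at pos, else None."""
    state, start = 0, pos
    while pos <= len(text):
        ch = text[pos] if pos < len(text) else ""   # "" acts as end delimiter
        if state == 0:
            if ch.isalpha():
                state, pos = 1, pos + 1             # first char must be a letter
            else:
                return None
        elif state == 1:
            if ch.isalnum():
                pos += 1                            # stay in state 1
            else:
                return text[start:pos], pos         # state 2: token found
    return None
```

On reaching state 2 the routine returns the lexeme, which is where a real scanner would build the token and consult the symbol table.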
Parser (Syntax Analyzer):
A parser for grammar G is a program that takes as input a string w and produces as output either a parse tree for w, if w is a sentence of G, or an error message indicating that w is not a sentence of G.

Parsing Strategies:
1) Top-Down Parsing:
   + The parser builds the parse tree from top to bottom, i.e. from the root to the leaves.
   + Uses Leftmost Derivation.
   + Backtracking may be required.
   Examples:
   - Recursive Descent Parser
   - Predictive Parser (LL(1))
2) Bottom-Up Parsing:
   + The parser builds the parse tree from bottom to top, i.e. from the leaves to the root.
   + Uses Rightmost Derivation (in reverse).
   + No backtracking is required.
   Examples:
   - Shift Reduce Parser
   - Operator Precedence Parser
   - Simple LR Parser (SLR)
   - Canonical LR Parser
   - Lookahead LR Parser (LALR)

Top-Down Parsing:
This parsing method is top-down because it attempts to construct a parse tree for an input string beginning at the root (the top) and working down towards the leaves (the bottom).

Recursive-Descent Parser:
A parser that uses a set of recursive procedures to recognize its input with no backtracking is called a recursive-descent parser.

Predictive Parser:
A predictive parser is an efficient way of implementing recursive-descent parsing by handling the stack of activation records explicitly.

Bottom-Up Parsing:
This parsing method is bottom-up because it attempts to construct a parse tree for an input string beginning at the leaves (the bottom) and working up towards the root (the top).

Shift Reduce Parser:
A convenient way to implement a shift-reduce parser is to use a stack and an input buffer. We use $ to mark the bottom of the stack and the right end of the input.

Operator Precedence Parser:
In operator precedence parsing, three disjoint precedence relations <, = and > are defined between pairs of terminals. These precedence relations guide the selection of handles.
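The recursive-descent idea above (one procedure per nonterminal, no backtracking) can be illustrated with a small sketch. This is hypothetical code, not from the notes: the grammar is a right-recursive variant of E -> E + T, T -> T * F, F -> id (rewritten because recursive descent cannot handle left recursion), and all names are invented.

```python
def rd_parse(tokens):
    """Return True if tokens form a sentence of E -> T + E | T, T -> id * T | id."""
    pos = 0

    def match(expected):
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == expected:
            pos += 1
            return True
        return False

    def T():                       # T -> id * T | id
        if not match("id"):
            return False
        if pos < len(tokens) and tokens[pos] == "*":
            match("*")
            return T()             # recursive call stands in for the parse stack
        return True

    def E():                       # E -> T + E | T
        if not T():
            return False
        if pos < len(tokens) and tokens[pos] == "+":
            match("+")
            return E()
        return True

    return E() and pos == len(tokens)
```

Each procedure decides which alternative to take by looking at the next token only, which is the "no backtracking" property the notes mention.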
LR Parsers:
LR parsers scan the input from Left to Right and construct a Rightmost derivation in reverse. LR parsers are attractive for the following reasons:
> LR parsers can be constructed to recognize virtually all programming language constructs for which a CFG can be written.
> The class of grammars that can be parsed by the LR method is a proper superset of the class handled by operator precedence or any of the other common shift-reduce techniques discussed.
> LR parsers can detect syntactic errors as soon as it is possible to do so on a left-to-right scan of the input.

SLR Parser:

Step 1: Augment the given grammar:
If G is a grammar with start symbol S, then the augmented grammar G' for G is G with a new start symbol S' and production S' -> S. The purpose of this new starting production is to indicate to the parser when it should stop parsing and announce acceptance of the input. This would occur when the parser is about to reduce by S' -> S.

Step 2: Constructing the Canonical Collection of LR(0) Items:

procedure CLOSURE(I);
begin
  repeat
    for each item A -> α.Bβ in I and each production B -> γ of G
      such that B -> .γ is not in I do
        add B -> .γ to I
  until no more items can be added to I;
  return I;
end;

procedure GOTO(I, X);
begin
  let J be the set of items [A -> αX.β] such that [A -> α.Xβ] is in I;
  return CLOSURE(J);
end;

procedure ITEMS(G');
begin
  C := { CLOSURE({S' -> .S}) };
  repeat
    for each set of items I in C and each grammar symbol X
      such that GOTO(I, X) is not empty and is not in C do
        add GOTO(I, X) to C
  until no more sets of items can be added to C;
end;

Fig: The Sets-of-Items Construction

Step 3: Constructing the SLR parsing table:
Input: C, the canonical collection of sets of items for an augmented grammar G'.
Output: If possible, an LR parsing table consisting of a parsing action function ACTION and a goto function GOTO.
Method: Let C = { I0, I1, ..., In }. The states of the parser are 0, 1, ..., n, state i being constructed from Ii. The parsing actions for state i are determined as follows:
1) If [A -> α.aβ] is in Ii and GOTO(Ii, a) = Ij, then set ACTION[i, a]
to "shift j"; here a is a terminal.
2) If [A -> α.] is in Ii, then set ACTION[i, a] to "reduce A -> α" for all a in FOLLOW(A).
3) If [S' -> S.] is in Ii, then set ACTION[i, $] to "accept".
If any conflicting actions are generated by the above rules, we say the grammar is not SLR(1). The algorithm fails to produce a valid parser in this case.
The goto transitions for state i are constructed using the rule:
4) If GOTO(Ii, A) = Ij, then GOTO[i, A] = j.
5) All entries not defined by rules (1) through (4) are made "error".
6) The initial state of the parser is the one constructed from the set of items containing [S' -> .S].

Step 4: Example of moves of the LR parser: (Refer Class Notes)

Canonical LR Parser:

Step 1: Augment the given grammar:
If G is a grammar with start symbol S, then the augmented grammar G' for G is G with a new start symbol S' and production S' -> S. The purpose of this new starting production is to indicate to the parser when it should stop parsing and announce acceptance of the input. This would occur when the parser is about to reduce by S' -> S.

Step 2: Constructing the Canonical Collection of LR(1) Items:

procedure CLOSURE(I);
begin
  repeat
    for each item [A -> α.Bβ, a] in I, each production B -> γ in G',
      and each terminal b in FIRST(βa) such that [B -> .γ, b] is not in I do
        add [B -> .γ, b] to I
  until no more items can be added to I;
  return I;
end;

procedure GOTO(I, X);
begin
  let J be the set of items [A -> αX.β, a] such that [A -> α.Xβ, a] is in I;
  return CLOSURE(J);
end;

procedure ITEMS(G');
begin
  C := { CLOSURE({[S' -> .S, $]}) };
  repeat
    for each set of items I in C and each grammar symbol X
      such that GOTO(I, X) is not empty and is not in C do
        add GOTO(I, X) to C
  until no more sets of items can be added to C;
end;

Fig: The Sets-of-Items Construction

Step 3: Constructing the canonical LR parsing table:
Input: A grammar G augmented by production S' -> S.
Output: If possible, a canonical LR parsing table consisting of a parsing action function ACTION and a goto function GOTO.
Method: Let C = { I0, I1, ..., In }. The states of the parser are 0, 1, ..., n, state i being constructed from Ii. The parsing actions for state i are determined as follows:
1) If [A -> α.aβ, b] is in Ii and GOTO(Ii, a) = Ij, then set ACTION[i, a] to "shift j"; here a is a terminal.
2) If [A -> α., a] is in Ii, then set ACTION[i, a] to "reduce A -> α".
3) If [S' -> S., $] is in Ii, then set ACTION[i, $] to "accept".
If any conflicting actions are generated by the above rules, we say the grammar is not LR(1). The algorithm fails to produce a valid parser in this case.
The goto transitions for state i are constructed using the rule:
4) If GOTO(Ii, A) = Ij, then GOTO[i, A] = j.
5) All entries not defined by rules (1) through (4) are made "error".
6) The initial state of the parser is the one constructed from the set of items containing [S' -> .S, $].

LALR Parser: (Refer Class Notes)

Top-Down Parsing (LL Parsing) v/s Bottom-Up Parsing (LR Parsing):
1) Top-down parsing constructs the parse tree beginning at the root and works down towards the leaves; bottom-up parsing constructs the parse tree beginning at the leaves and works up towards the root.
2) Top-down parsing cannot handle left recursion; bottom-up parsing can.

Module 9: Intermediate Code Generation

> In many compilers the source code is translated into a language which is intermediate in complexity between a high level programming language and machine code.
> Such a language is therefore called intermediate code or intermediate text.
> The following intermediate codes are often used:
  o Postfix Notation
  o Syntax Trees
  o Directed Acyclic Graph (DAG)
  o Three Address Code Form (3ACF)

Postfix Notation:
> The ordinary (infix) way of writing the sum of a and b is with the operator in the middle: a + b. The postfix notation for
the same expression places the operator at the right end, as ab+.
> In general, if e1 and e2 are any postfix expressions and θ is any binary operator, the result of applying θ to the values denoted by e1 and e2 is indicated in postfix notation by e1 e2 θ.

Eg: E -> E + E | E * E | id

> Advantage: No parentheses are needed in postfix notation, because the position and arity (number of arguments) of the operators permit only one decoding of a postfix expression.

Q) Write the postfix notation for the following:
1) Infix: a + b * c          Postfix: abc*+
2) Infix: (a + b) * c        Postfix: ab+c*
3) Infix: a + b * c - d      Postfix: abc*+d-
4) Infix: a * b + c / d      Postfix: ab*cd/+
5) Infix: a * (b + c / d)    Postfix: abcd/+*
6) Infix: a * (b + c) / d    Postfix: abc+*d/
(Method: use a stack of operators; push the incoming operator, first popping any stacked operator of higher or equal precedence to the output.)

Syntax Tree:
> The parse tree is a useful intermediate-language representation for a source program, especially in optimizing compilers, where the intermediate code needs to be extensively restructured.
> A parse tree, however, often contains redundant information which can be eliminated, thus producing a more economical representation of the source program.
> A syntax tree is a tree in which each leaf represents an operand and each interior node an operation.
> Advantage: Redundant information in the parse tree is avoided in the syntax tree.

Parse Tree v/s Syntax Tree:
1) In a parse tree, interior nodes represent non-terminals and leaves represent terminals; in a syntax tree, interior nodes represent operators and leaves represent operands.
2) A parse tree contains redundant information and hence is not compact; a syntax tree contains no redundant information and hence is compact.
3) A parse tree is rarely used as a data structure for representing a program; a syntax tree is widely used.
4) A parse tree represents the concrete syntax of a program; a syntax tree represents abstract syntax.

Directed Acyclic Graph (DAG):
> A DAG for an expression identifies the common subexpressions in the expression.
> Like a syntax tree, a DAG has a node for every subexpression of the expression; an interior node represents an operator and its children represent its operands.
> The difference is that a node in a DAG representing a common subexpression has more than one "parent"; in a syntax tree, the common subexpression would be represented as a duplicated subtree.
> Advantage: Common subexpressions can be handled more efficiently.

Q) Draw the syntax tree and DAG for the following:
1) a * (a - b) + c * (a - b)
2) a + a * (b - c) + (b - c) * d
(In the syntax tree the repeated subexpressions (a - b) and (b - c) appear as duplicated subtrees; in the DAG each is a single shared node.)

Three-Address Code:
> Three-address code is a linearized representation of a syntax tree or a DAG in which explicit names correspond to the interior nodes of the graph.
> Three-address code is a sequence of statements of the form A = B op C, where A, B and C are either programmer-defined names, constants or compiler-generated temporary names, and op stands for any operator.
> The reason for the name "three-address code" is that each statement usually contains three addresses, two for the operands and one for the result.

Eg:
HLL statement            3ACF
a = (e - b) * (c + d)    t1 = e - b
                         t2 = c + d
                         t3 = t1 * t2
                         a = t3

Types of Three-Address Statements:
1. Assignment statements of the form x = y op z, where op is a binary arithmetic or logical operation.
2. Assignment instructions of the form x = op y, where op is a unary operation.
3. Copy statements of the form x = y, where the value of y is assigned to x.

Implementation of Three-Address Statements:
> The three-address statement is an abstract form of intermediate code. In an actual compiler these statements can be implemented in one of the following ways:
  o Quadruples
  o Triples

Quadruples:
> A quadruple is a record structure with four fields: OP, ARG1, ARG2 and RESULT.
> The contents of fields ARG1, ARG2 and RESULT are normally pointers to the symbol-table entries for the names represented by these fields.

Eg: for a = (e - b) * (c + d):
3ACF                Quadruples
t1 = e - b               OP   ARG1   ARG2   RESULT
t2 = c + d          (0)  -    e      b      t1
t3 = t1 * t2        (1)  +    c      d      t2
a = t3              (2)  *    t1     t2     t3
                    (3)  =    t3            a

Advantage: Simple to implement.
Disadvantage: Requires a lot of temporary variables.

Triples:
> To avoid entering temporary names into the symbol table, one can allow the statement computing a temporary value to represent that value.
> If we do so, three-address statements are representable by a structure with only three fields: OP, ARG1 and ARG2. The arguments of OP are either pointers into the symbol table (for programmer-defined names or constants) or pointers into the structure itself (for temporary values).
> Since three fields are used, this intermediate code format is known as triples.
> Parenthesized numbers are used to represent pointers into the triple structure, while symbol-table pointers are represented by the names themselves.

Eg: for a = (e - b) * (c + d):
3ACF                Triples
t1 = e - b               OP   ARG1   ARG2
t2 = c + d          (0)  -    e      b
t3 = t1 * t2        (1)  +    c      d
a = t3              (2)  *    (0)    (1)
                    (3)  =    a      (2)

Advantage: Does not require any temporary variables.
Disadvantage: Complex to implement.

Some problems based on three-address statements:

Q1) a = b * c + d * (f - 2)
Solution (3ACF and quadruples):
100  t1 = f - 2          OP   ARG1   ARG2   RESULT
101  t2 = b * c          -    f      2      t1
102  t3 = d * t1         *    b      c      t2
103  t4 = t2 + t3        *    d      t1     t3
104  a = t4              +    t2     t3     t4
                         =    t4            a

Q2) if (x < y) then a = b + c * 3; else p = q + r;
Solution (3ACF):
100  if x < y goto 102
101  goto 105
102  t1 = c * 3
103  a = b + t1
104  goto 106
105  p = q + r
106  ...

Q3) while (a < b) do if (c < d) then x = y + z else x = y - z
Solution (3ACF):
100  if a < b goto 102
101  goto 110
102  if c < d goto 104
103  goto 107
104  t1 = y + z
105  x = t1
106  goto 109
107  t2 = y - z
108  x = t2
109  goto 100
110  ...

Module 10: Syntax Directed Translation

Introduction:
> Syntax Directed Translation is a notational framework
for intermediate code generation which allows subroutines or "semantic actions" to be attached to the productions of the context-free grammar.
> Syntax directed translation is useful because it enables the compiler designer to express the generation of intermediate code directly in terms of the syntactic structure of the source language.

Semantic Actions:
> A syntax directed translation scheme is a context-free grammar in which a program fragment called an output action (semantic action or semantic rule) is associated with each production.
> The output action may involve the computation of values for variables belonging to the compiler, the generation of intermediate code, the printing of an error diagnostic, or the placement of some value in a table.

Implementation of Syntax Directed Translators:
> A syntax directed translation scheme is a convenient description of what we would like to be done.
> The output defined is independent of the kind of parser used to construct the parse trees, and of the kind of mechanism used to compute the translations.

Syntax Directed Construction of Syntax Trees:
Like postfix code, it is easy to define either a parse tree or a syntax tree in terms of a syntax directed translation scheme:

E -> E1 + T   { E.ptr = mknode('+', E1.ptr, T.ptr) }
E -> T        { E.ptr = T.ptr }
T -> T1 * F   { T.ptr = mknode('*', T1.ptr, F.ptr) }
T -> F        { T.ptr = F.ptr }
F -> id       { F.ptr = mkleaf(id.place) }

> where ptr is a pointer-valued attribute used to link to a node in the syntax tree, and place is a pointer-valued attribute used to link to the symbol record that contains the name of the identifier.
> mkleaf generates leaf nodes, and mknode generates interior nodes.
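A minimal sketch of mkleaf/mknode (the names come from the scheme above; the implementation is assumed, not from the notes): nodes are plain tuples, and the memo table makes repeated subexpressions share a single node, which is exactly the syntax-tree-to-DAG step described in Module 9.

```python
# Memo table: without it, mknode would build an ordinary syntax tree
# with duplicated subtrees; with it, common subexpressions are shared
# and the result is a DAG.
_nodes = {}

def mkleaf(place):
    """Leaf node for an identifier; place stands in for the symbol-table link."""
    return _nodes.setdefault(("leaf", place), ("leaf", place))

def mknode(op, left, right):
    """Interior node for operator op; reuses an existing node if one matches."""
    key = (op, id(left), id(right))
    return _nodes.setdefault(key, (op, left, right))

# Building a * (a - b) + c * (a - b): the two (a - b) subexpressions
# become one shared node.
a, b, c = mkleaf("a"), mkleaf("b"), mkleaf("c")
d1 = mknode("-", a, b)
d2 = mknode("-", a, b)
root = mknode("+", mknode("*", a, d1), mknode("*", c, d2))
```

Here `d1 is d2` holds: both calls return the same node object, so the DAG contains a single (a - b) node with two parents.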
Example: moves of the SLR parser for the grammar E -> E + T | T, T -> T * F | F, F -> id (the parse table for this grammar is worked out in the LR(0)/SLR design problem later in these notes):

Input id + id * id:
Stack                  Input        Action
$0                     id+id*id$    shift 4
$0 id4                 +id*id$      reduce F -> id
$0 F3                  +id*id$      reduce T -> F
$0 T2                  +id*id$      reduce E -> T
$0 E1                  +id*id$      shift 5
$0 E1 +5               id*id$       shift 4
$0 E1 +5 id4           *id$         reduce F -> id
$0 E1 +5 F3            *id$         reduce T -> F
$0 E1 +5 T7            *id$         shift 6
$0 E1 +5 T7 *6         id$          shift 4
$0 E1 +5 T7 *6 id4     $            reduce F -> id
$0 E1 +5 T7 *6 F8      $            reduce T -> T*F
$0 E1 +5 T7            $            reduce E -> E+T
$0 E1                  $            accept

Input id * id + id:
Stack                  Input        Action
$0                     id*id+id$    shift 4
$0 id4                 *id+id$      reduce F -> id
$0 F3                  *id+id$      reduce T -> F
$0 T2                  *id+id$      shift 6
$0 T2 *6               id+id$       shift 4
$0 T2 *6 id4           +id$         reduce F -> id
$0 T2 *6 F8            +id$         reduce T -> T*F
$0 T2                  +id$         reduce E -> T
$0 E1                  +id$         shift 5
$0 E1 +5               id$          shift 4
$0 E1 +5 id4           $            reduce F -> id
$0 E1 +5 F3            $            reduce T -> F
$0 E1 +5 T7            $            reduce E -> E+T
$0 E1                  $            accept

Module 11: Code Optimization

Introduction:
The code optimization phase attempts to improve the intermediate code so that faster-running machine code can be generated. Code optimization is performed to:
* Minimize the time taken to execute a program.
* Minimize the amount of memory occupied.

Machine Dependent Optimization:
1) Machine instructions that use registers as operands must be selected, since such instructions are faster than instructions that refer to locations in memory.
2) Special control instructions or addressing modes could be used to generate efficient object code.
3) Consecutive instructions that involve different functional units could be executed at the same time.

Machine Independent Optimization:
There are two classes of Machine Independent Optimizations (sources of optimization):
1. Local Code Optimization (Function preserving transformations)
2.
Global Code Optimization (Loop Optimizations)

Local Code Optimization:
> Local optimization performs optimization within a function definition; therefore it is also called a Function Preserving Transformation.
> There are a number of ways in which a compiler can improve a program without changing the function it computes. Examples of function preserving transformations are:
  * Algebraic Simplification
  * Constant Folding
  * Constant Propagation
  * Common Sub-expression Elimination
  * Dead Code Elimination

Algebraic Simplification:
* Some statements can be deleted while some can be simplified.
* Eg:
  Before Optimization      After Optimization
  x = x + 0;               (can be deleted)
  x = x * 1;               (can be deleted)
  x = x * 0;               (can be simplified to x = 0;)

Constant Folding:
* Operations on constants can be computed at compile time.
* Expression values can be pre-computed at compilation time, hence requiring less execution time.
* Eg:
  Before Optimization              After Optimization
  area = (22.0/7.0) * r * r;       area = 3.14286 * r * r;
  a = 3 * 2;                       a = 6;

Constant Propagation:
* Replace a variable with a constant which has been assigned to it earlier.
* Advantage: Smaller code and fewer registers.
* Eg:
  Before Optimization      After Optimization
  PI = 3.14286;            area = 3.14286 * r * r;
  area = PI * r * r;

Common Sub-expression Elimination:
* Identify common sub-expressions present in different expressions, compute them once, and use the result in all the places.
* Common sub-expressions are thus evaluated once and not repeatedly.
* Eg:
  Before Optimization      After Optimization
  a = b * c + 10;          temp = b * c;
  x = b * c + 5;           a = temp + 10;
                           x = temp + 5;

Dead Code Elimination:
* Dead code is a portion of the program which will not be executed on any path of the program.
* Eg:
  Before Optimization      After Optimization
  DEBUG = false;           DEBUG = false;
  if (DEBUG) { ... }       (the if-block can be deleted)
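The first three local transformations above can be combined in a toy optimization pass. This is an illustrative sketch, not the notes' algorithm: it runs constant propagation and constant folding over a straight-line list of three-address statements represented as (op, arg1, arg2, result) tuples, all of which are invented conventions.

```python
def fold_and_propagate(code):
    """One pass of constant propagation + folding over straight-line 3AC."""
    consts, out = {}, []                      # known constant values per name
    for op, a1, a2, res in code:
        a1 = consts.get(a1, a1)               # constant propagation
        a2 = consts.get(a2, a2)
        if op == "=" and isinstance(a1, (int, float)):
            consts[res] = a1                  # res now holds a known constant
            out.append(("=", a1, None, res))
        elif isinstance(a1, (int, float)) and isinstance(a2, (int, float)):
            val = {"+": a1 + a2, "-": a1 - a2,
                   "*": a1 * a2, "/": a1 / a2}[op]
            consts[res] = val                 # constant folding
            out.append(("=", val, None, res))
        else:
            consts.pop(res, None)             # res is no longer a known constant
            out.append((op, a1, a2, res))
    return out
```

For example, a = 3 * 2 followed by x = a + r folds the first statement to a = 6 and propagates 6 into the second.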
Loop Optimizations (Global Code Optimization):
> The running time of a program may be improved if we decrease the number of instructions in an inner loop, even if we increase the amount of code outside that loop.
> Examples of loop optimization techniques are:
  > Code Motion
  > Strength Reduction
  > Loop Fission or Loop Distribution
  > Loop Unwinding

Code Motion:
* Moving code from one part of the program to another without modifying the algorithm.
* Advantage: Reduces the execution frequency of the code subjected to movement.
* Eg:
  Before Optimization              After Optimization
  while (i < (max * 25))           t = max * 25;
  { i++; }                         while (i < t) { i++; }

Strength Reduction:
* Replacement of an operator with a less costly one. Multiplication or division by powers of two can be replaced by adds or shifts.
* Advantage: Replacing operations that require more CPU cycles with ones that require fewer CPU cycles leads to faster-running code.
* Eg:
  Before Optimization      After Optimization
  r1 = r2 * 2;             r1 = r2 << 1;
  r1 = r2 / 2;             r1 = r2 >> 1;

Loop Fission or Loop Distribution:
* Loop fission attempts to break a loop into multiple loops over the same index range, but with each taking only a part of the loop's body.
* Advantage: Can improve locality of reference, both of the data being accessed in the loop and of the code in the loop's body.
* Eg:
  Before Optimization                After Optimization
  int i, a[100], b[100];             int i, a[100], b[100];
  for (i = 0; i < 100; i++) {        for (i = 0; i < 100; i++)
      a[i] = 1;                          a[i] = 1;
      b[i] = 2;                      for (i = 0; i < 100; i++)
  }                                      b[i] = 2;

Loop Unwinding:
* Loop unwinding, also known as loop unrolling, is a loop transformation technique that attempts to optimize a program's execution speed at the expense of its binary size, by repeating the loop body and reducing the number of iterations.

Syntax Directed Translation to Postfix Notation:
A translation scheme that emits postfix code (compare the mknode scheme in Module 10) is:

E -> E1 + T   { E.code = concate(E1.code, T.code, '+') }
E -> T        { E.code = T.code }
T -> T1 * F   { T.code = concate(T1.code, F.code, '*') }
T -> F        { T.code = F.code }
F -> id       { F.code = getname(id.place) }

> where code is a string-valued attribute used to hold the postfix expression, and place is a pointer-valued attribute used to link
to the symbol record that contains the name of the identifier.
> getname returns the name of the identifier from the symbol-table record that is pointed to by place, and concate(s1, s2, s3) returns the concatenation of the strings s1, s2 and s3, respectively.

Q) Design an LR(0) parser for the following grammar:
E -> E + T | T
T -> T * F | F
F -> id

Solution:
Step 1: Augment the given grammar:
E' -> E
1) E -> E + T
2) E -> T
3) T -> T * F
4) T -> F
5) F -> id

Step 2: Design the LR(0) parser table:

FOLLOW(E') = {$}
FOLLOW(E)  = {+, $}
FOLLOW(T)  = {+, *, $}
FOLLOW(F)  = {+, *, $}

        Actions                  Goto
State |  id    +     *     $   |  E    T    F
  0   |  s4                    |  1    2    3
  1   |        s5         acc  |
  2   |        r2    s6   r2   |
  3   |        r4    r4   r4   |
  4   |        r5    r5   r5   |
  5   |  s4                    |       7    3
  6   |  s4                    |            8
  7   |        r1    s6   r1   |
  8   |        r3    r3   r3   |

Step 3: Moves of the parser.

Input id + id * id:
Stack                  Input        Action
$0                     id+id*id$    shift 4
$0 id4                 +id*id$      reduce F -> id
$0 F3                  +id*id$      reduce T -> F
$0 T2                  +id*id$      reduce E -> T
$0 E1                  +id*id$      shift 5
$0 E1 +5               id*id$       shift 4
$0 E1 +5 id4           *id$         reduce F -> id
$0 E1 +5 F3            *id$         reduce T -> F
$0 E1 +5 T7            *id$         shift 6
$0 E1 +5 T7 *6         id$          shift 4
$0 E1 +5 T7 *6 id4     $            reduce F -> id
$0 E1 +5 T7 *6 F8      $            reduce T -> T*F
$0 E1 +5 T7            $            reduce E -> E+T
$0 E1                  $            accept

Input id * id + id:
Stack                  Input        Action
$0                     id*id+id$    shift 4
$0 id4                 *id+id$      reduce F -> id
$0 F3                  *id+id$      reduce T -> F
$0 T2                  *id+id$      shift 6
$0 T2 *6               id+id$       shift 4
$0 T2 *6 id4           +id$         reduce F -> id
$0 T2 *6 F8            +id$         reduce T -> T*F
$0 T2                  +id$         reduce E -> T
$0 E1                  +id$         shift 5
$0 E1 +5               id$          shift 4
$0 E1 +5 id4           $            reduce F -> id
$0 E1 +5 F3            $            reduce T -> F
$0 E1 +5 T7            $            reduce E -> E+T
$0 E1                  $            accept
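The worked table above can drive a generic LR parsing loop. The sketch below transcribes the ACTION/GOTO entries into dictionaries; the driver itself, and all names in it, are an assumed illustration rather than code from the notes.

```python
# Productions as (head, body length); index matches the r-numbers in the table.
PRODS = [("E'", 1), ("E", 3), ("E", 1), ("T", 3), ("T", 1), ("F", 1)]
ACTION = {(0, "id"): ("s", 4), (1, "+"): ("s", 5), (1, "$"): ("acc",),
          (2, "+"): ("r", 2), (2, "*"): ("s", 6), (2, "$"): ("r", 2),
          (3, "+"): ("r", 4), (3, "*"): ("r", 4), (3, "$"): ("r", 4),
          (4, "+"): ("r", 5), (4, "*"): ("r", 5), (4, "$"): ("r", 5),
          (5, "id"): ("s", 4), (6, "id"): ("s", 4),
          (7, "+"): ("r", 1), (7, "*"): ("s", 6), (7, "$"): ("r", 1),
          (8, "+"): ("r", 3), (8, "*"): ("r", 3), (8, "$"): ("r", 3)}
GOTO = {(0, "E"): 1, (0, "T"): 2, (0, "F"): 3,
        (5, "T"): 7, (5, "F"): 3, (6, "F"): 8}

def lr_parse(tokens):
    """Return True iff the token list is accepted by the table above."""
    tokens = tokens + ["$"]
    stack, i = [0], 0                        # stack holds state numbers only
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            return False                     # blank table entry = error
        if act[0] == "acc":
            return True
        if act[0] == "s":                    # shift: push state, advance input
            stack.append(act[1]); i += 1
        else:                                # reduce by production act[1]
            head, body_len = PRODS[act[1]]
            del stack[len(stack) - body_len:]
            stack.append(GOTO[(stack[-1], head)])
```

Running `lr_parse(["id", "+", "id", "*", "id"])` reproduces exactly the sequence of moves in the first trace above.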

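As a closing illustration for the postfix-notation examples of Module 9, the infix-to-postfix conversion used there can be sketched with an operator stack (the shunting-yard idea). The precedence table and names are assumptions; the sketch handles single-character operands only, matching the worked examples.

```python
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(infix):
    """Convert an infix expression over single-char operands to postfix."""
    out, stack = [], []
    for ch in infix.replace(" ", ""):
        if ch.isalnum():                      # operand goes straight to output
            out.append(ch)
        elif ch == "(":
            stack.append(ch)
        elif ch == ")":
            while stack[-1] != "(":           # flush until the matching "("
                out.append(stack.pop())
            stack.pop()                       # discard the "("
        else:                                 # binary operator
            while stack and stack[-1] != "(" and PREC[stack[-1]] >= PREC[ch]:
                out.append(stack.pop())       # pop higher-or-equal precedence
            stack.append(ch)
    while stack:                              # flush remaining operators
        out.append(stack.pop())
    return "".join(out)
```

This reproduces the six worked answers, e.g. a + b * c gives abc*+ and a * (b + c) / d gives abc+*d/.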