
Unit 1 Introduction: COCSC14 (Harshita Sharma)

The document provides an overview of language processors including compilers, interpreters, and assemblers. It discusses the compilation process which involves lexical analysis, syntax analysis, semantic analysis, code generation, and optimization. Interpreters translate code line-by-line while compilers translate the entire program. Assemblers convert assembly code to machine code. Just-in-time compilers optimize bytecode during execution. Language processors play a crucial role in software development by translating human-readable code, enabling cross-platform use, and aiding in code quality.


UNIT 1 INTRODUCTION COCSC14

Harshita Sharma
OVERVIEW
Language processors
The compilation process
Types of compilers
Compiler-construction tools
Applications of compiler technology
Transition diagrams
Bootstrapping
Just-in-time compilation
LANGUAGE PROCESSORS
DEFINITION AND PURPOSE
Language processors are essential software tools that facilitate the
translation of human-readable source code into machine-
executable code or perform various language-related tasks.
They play a crucial role in bridging the gap between human
understanding and machine processing.
CATEGORIES
Compiler: a compiler is a program that can read a program in one language (the source language) and translate it into an equivalent program in another language (the target language).
Interpreter: an interpreter is another common kind of language processor. Instead of producing a target program as a translation, an interpreter appears to directly execute the operations specified in the source program on inputs supplied by the user.
Assembler: an assembler is a language processor that converts assembly language code into machine code or binary code that a computer's central processing unit (CPU) can execute directly.
COMPILER
Compiler: A compiler is a language processor that translates the entire source code of a high-level programming language into an equivalent program in another language, the target program.
The compilation process involves several stages, such as lexical analysis, syntax
analysis, semantic analysis, intermediate code generation, code optimization, and
code generation.
Once compiled, the generated code can be executed independently of the original
source code.
If the target program is an executable machine-language program, it can then be
called by the user to process inputs and produce outputs.
INTERPRETER
Interpreter: Unlike a compiler, an interpreter translates the source code line-by-line or
statement-by-statement and executes it immediately without generating an
intermediate code or a separate executable.
The interpreter reads the source code, performs lexical and syntax analysis, and
executes the corresponding actions for each statement in real-time. This approach is
often preferred for scripting languages and provides flexibility and ease of
debugging.
Instead of producing a target program as a translation, an interpreter appears to
directly execute the operations specified in the source program on inputs supplied by
the user
ASSEMBLER
Assembler: An assembler is a language processor used in the translation of assembly
language code into machine code. Assembly language is a low-level programming
language that represents instructions using mnemonics and symbols rather than binary
codes.
Assemblers facilitate the conversion of human-readable assembly code into machine
instructions that a computer's processor can execute directly.
JUST-IN-TIME COMPILERS
Just-In-Time (JIT) Compilation: JIT compilation has gained popularity, especially in
virtual machines and runtime environments. It involves translating code at runtime to
machine code, allowing for dynamic optimizations and adaptive execution based on
the runtime context.
Example: Java language processors combine compilation and interpretation. A Java
source program may first be compiled into an intermediate form called bytecodes.
The bytecodes are then interpreted by a virtual machine. A benefit of this
arrangement is that bytecodes compiled on one machine can be interpreted on
another machine, perhaps across a network. In order to achieve faster processing of
inputs to outputs, some Java compilers, called just-in-time compilers, translate the
bytecodes into machine language immediately before they run the intermediate
program to process the input.
SIGNIFICANCE OF LANGUAGE PROCESSORS
Translation: The primary purpose of language processors is to translate high-level
programming languages, such as Java, C++, Python, or others, into machine-
executable code or intermediate code. This translation enables computers to
understand and execute the instructions written in human-readable form, facilitating
the creation of a wide range of software applications.
Code Optimization: Language processors, especially compilers, often perform code
optimization to improve the efficiency and performance of the generated machine
code. Various optimization techniques are employed to reduce execution time,
minimize memory usage, and enhance overall program efficiency. Code optimization
contributes to faster and more reliable software execution.
Platform Independence: With the help of language processors, developers can write
code in a high-level programming language once and then compile it for different
target platforms or architectures. This capability allows for cross-platform
development, where a single codebase can be used to deploy software on multiple
devices and operating systems, saving time and effort.
Error Detection and Reporting: Language processors assist in detecting syntax errors,
semantic errors, and other issues in the source code. These tools provide valuable
feedback to developers, enabling them to identify and correct errors before the
code is executed. This aids in improving the overall quality and reliability of software
applications.
Language Extensions: Language processors can introduce language extensions or
domain-specific languages (DSLs) to cater to specific application domains. DSLs
provide a more expressive and concise way to describe certain tasks, making it
easier for developers to address complex problems in specific areas.
Scripting and Rapid Prototyping: Interpreters, a type of language processor, are
popular for scripting languages and rapid prototyping. They allow developers to
write code in an interactive manner, providing immediate feedback and easy
debugging. This makes them ideal for quick experimentation and prototyping of new
ideas.
Language Evolution: Language processors play a significant role in the evolution of
programming languages. As new language features or paradigms are proposed and
accepted, language processors must be updated to support these changes. This keeps
programming languages relevant, modern, and aligned with the needs of developers
and the computing industry.
THE LANGUAGE PROCESSING SYSTEM
THE COMPILATION PROCESS
PHASES OF A COMPILER
ANALYSIS PART
The analysis part breaks up the source program into constituent pieces and imposes a
grammatical structure on them.
It then uses this structure to create an intermediate representation of the source
program.
If the analysis part detects that the source program is either syntactically ill formed
or semantically unsound, then it must provide informative messages, so the user can
take corrective action.
The analysis part also collects information about the source program and stores it in a
data structure called a symbol table, which is passed along with the intermediate
representation to the synthesis part.
SYNTHESIS PART
The synthesis part constructs the desired target program from the intermediate
representation and the information in the symbol table. The analysis part is often
called the front end of the compiler; the synthesis part is the back end.
Some compilers have a machine-independent optimization phase between the front
end and the back end.
The purpose of this optimization phase is to perform transformations on the
intermediate representation, so that the back end can produce a better target
program than it would have otherwise produced from an unoptimized intermediate
representation.
Since optimization is optional, one or the other of the two optimization phases may be missing.
LEXICAL ANALYZER/SCANNER
The lexical analyzer reads the stream of characters making up the source program
and groups the characters into meaningful sequences called lexemes.
For each lexeme, the lexical analyzer produces as output a token of the form:
<token-name, attribute-value>
that it passes on to the subsequent phase, syntax analysis.
In the token, the first component token-name is an abstract symbol that is used during
syntax analysis, and the second component attribute-value points to an entry in the
symbol table for this token. Information from the symbol-table entry is needed for
semantic analysis and code generation.
EXAMPLE: position = initial + rate * 60
Lexical Analysis:
1. position is a lexeme that would be mapped into a token <id, 1>, where id is an
abstract symbol standing for identifier and 1 points to the symbol table entry for
position. The symbol-table entry for an identifier holds information about the
identifier, such as its name and type.
2. The assignment symbol = is a lexeme that is mapped into the token <=>. Since this
token needs no attribute-value, we have omitted the second component. We could
have used any abstract symbol such as assign for the token-name, but for notational
convenience we have chosen to use the lexeme itself as the name of the abstract
symbol.
3. initial is a lexeme that is mapped into the token <id, 2>, where 2 points to the
symbol-table entry for initial.
4. + is a lexeme that is mapped into the token <+>.
5. rate is a lexeme that is mapped into the token <id,3>, where 3 points to the
symbol-table entry for rate.
6. * is a lexeme that is mapped into the token <*>
7. 60 is a lexeme that is mapped into the token <60>
Blanks separating the lexemes would be discarded by the lexical analyzer.
Final output by lexical analyser:
representation of the assignment statement after lexical analysis as the sequence of
tokens:
<id,1> <=> <id,2> <+> <id,3> <*> <60>
In this representation, the token names =, +, and * are abstract symbols for the
assignment, addition, and multiplication operators, respectively.
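The steps above can be sketched in code. The following is a minimal illustrative lexer, not the implementation of any real compiler: it groups characters into lexemes with a regular expression and emits <token-name, attribute-value> pairs, using the position of an identifier in a simple symbol table as its attribute.

```python
import re

# A minimal sketch of a lexical analyzer for the running example.
# The token shapes and symbol-table layout are illustrative assumptions.
def tokenize(source):
    symbol_table = []          # one entry per distinct identifier
    tokens = []
    for lexeme in re.findall(r"[A-Za-z_]\w*|\d+|[=+*]", source):
        if lexeme[0].isalpha() or lexeme[0] == "_":
            if lexeme not in symbol_table:
                symbol_table.append(lexeme)
            # <id, n> where n points to the symbol-table entry
            tokens.append(("id", symbol_table.index(lexeme) + 1))
        else:
            # numbers and operators need no attribute value;
            # the lexeme itself serves as the token name
            tokens.append((lexeme,))
    return tokens, symbol_table

tokens, table = tokenize("position = initial + rate * 60")
print(tokens)
# [('id', 1), ('=',), ('id', 2), ('+',), ('id', 3), ('*',), ('60',)]
```

Blanks are discarded automatically because the regular expression never matches them.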
SYNTAX ANALYZER/PARSER
The second phase of the compiler is syntax analysis or parsing.
The parser uses the first components of the tokens produced by the lexical analyzer
to create a tree-like intermediate representation that depicts the grammatical
structure of the token stream.
A typical representation is a syntax tree in which each interior node represents an
operation and the children of the node represent the arguments of the operation.
A syntax tree for the token stream shown previously is shown as the output of the
syntactic analyzer.
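As a sketch of what the parser does, the following minimal recursive-descent parser (an illustrative assumption, using a small grammar in which * binds tighter than +) turns the token stream above into a nested-tuple syntax tree in which each interior node is an operator and its children are the operands.

```python
# A minimal sketch of a parser for the running example.
# Assumed grammar: assignment -> id = expr ; expr -> term (+ term)* ;
# term -> factor (* factor)*. Node shapes are illustrative.
def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def advance():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok
    def factor():
        return advance()            # an id or a number leaf
    def term():
        node = factor()
        while peek() == ("*",):
            advance()
            node = ("*", node, factor())
        return node
    def expr():
        node = term()
        while peek() == ("+",):
            advance()
            node = ("+", node, term())
        return node
    target = advance()              # <id,1>
    advance()                       # the '=' token
    return ("=", target, expr())

tree = parse([("id", 1), ("=",), ("id", 2), ("+",), ("id", 3), ("*",), ("60",)])
print(tree)
# ('=', ('id', 1), ('+', ('id', 2), ('*', ('id', 3), ('60',))))
```

Note how the grammar's structure makes * the child computed inside +, mirroring operator precedence in the tree.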
SEMANTIC ANALYSIS
The semantic analyzer uses the syntax tree and the information in the symbol table to
check the source program for semantic consistency with the language definition. It
also gathers type information and saves it in either the syntax tree or the symbol
table, for subsequent use during intermediate-code generation.
An important part of semantic analysis is type checking, where the compiler checks
that each operator has matching operands. For example, many programming
language definitions require an array index to be an integer; the compiler must
report an error if a floating-point number is used to index an array.
The language specification may permit some type conversions called coercions. For
example, a binary arithmetic operator may be applied to either a pair of integers or
to a pair of floating-point numbers. If the operator is applied to a floating-point
number and an integer, the compiler may convert or coerce the integer into a
floating-point number.
Suppose that position, initial, and rate have been declared to be floating-point
numbers, and that the lexeme 60 by itself forms an integer. The type checker in the
semantic analyzer discovers that the operator * is applied to a floating-point number
rate and an integer 60. In this case, the integer may be converted into a floating-
point number.
Notice that the output of the semantic analyzer has an extra node for the operator
inttofloat, which explicitly converts its integer argument into a floating-point number.
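The coercion step can be sketched as follows. This illustrative type checker walks the expression tree with an assumed table of declared types and wraps an integer operand of a mixed-type operation in an inttofloat node, as in the example above.

```python
# A minimal sketch of the coercion step: given the expression tree and a
# table of declared types, wrap integer operands of mixed int/float
# operations in an inttofloat node. Tree shapes are illustrative.
def annotate(node, types):
    if node[0] in ("+", "*"):
        left, lt = annotate(node[1], types)
        right, rt = annotate(node[2], types)
        if lt != rt:                      # mixed operand types: coerce
            if lt == "int":
                left, lt = ("inttofloat", left), "float"
            else:
                right, rt = ("inttofloat", right), "float"
        return (node[0], left, right), lt
    if node[0] == "id":
        return node, types[node[1]]       # look up the declared type
    return node, "int"                    # a bare number literal

types = {1: "float", 2: "float", 3: "float"}   # position, initial, rate
expr = ("+", ("id", 2), ("*", ("id", 3), ("60",)))
tree, t = annotate(expr, types)
print(tree)
# ('+', ('id', 2), ('*', ('id', 3), ('inttofloat', ('60',))))
```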
INTERMEDIATE CODE GENERATION
In the process of translating a source program into target code, a compiler may
construct one or more intermediate representations, which can have a variety of
forms. Syntax trees are a form of intermediate representation; they are commonly
used during syntax and semantic analysis.
After syntax and semantic analysis of the source program, many compilers generate
an explicit low-level or machine-like intermediate representation, which we can think
of as a program for an abstract machine. This intermediate representation should
have two important properties: it should be easy to produce and it should be easy to
translate into the target machine.
EXAMPLE: THREE-ADDRESS INSTRUCTIONS
t1 = inttofloat(60)
t2 = id3 * t1
t3 = id2 + t2
id1 = t3
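One way such three-address code can be produced is by a bottom-up walk of the annotated syntax tree. The sketch below is illustrative, not a production code generator; the t1/id1 naming scheme follows the example above.

```python
# A minimal sketch of three-address-code generation by walking the
# type-annotated syntax tree bottom-up, emitting one instruction per
# operator and inventing a fresh temporary for each result.
def gen_tac(tree):
    code = []
    counter = [0]
    def walk(node):
        if node[0] in ("+", "*"):
            left, right = walk(node[1]), walk(node[2])
            counter[0] += 1
            temp = f"t{counter[0]}"
            code.append(f"{temp} = {left} {node[0]} {right}")
            return temp
        if node[0] == "inttofloat":
            arg = walk(node[1])
            counter[0] += 1
            temp = f"t{counter[0]}"
            code.append(f"{temp} = inttofloat({arg})")
            return temp
        if node[0] == "id":
            return f"id{node[1]}"
        return node[0]              # a number literal
    code.append(f"id1 = {walk(tree)}")
    return code

tac = gen_tac(("+", ("id", 2), ("*", ("id", 3), ("inttofloat", ("60",)))))
print("\n".join(tac))
# t1 = inttofloat(60)
# t2 = id3 * t1
# t3 = id2 + t2
# id1 = t3
```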
CODE OPTIMIZER
The machine-independent code-optimization phase attempts to improve the
intermediate code so that better target code will result.
Usually better means faster, but other objectives may be desired, such as shorter
code, or target code that consumes less power.
For example, a straightforward algorithm generates the intermediate code using an
instruction for each operator in the tree representation that comes from the semantic
analyzer.
A simple intermediate code generation algorithm followed by code optimization is a
reasonable way to generate good target code. The optimizer can deduce that the
conversion of 60 from integer to floating point can be done once and for all at
compile time, so the inttofloat operation can be eliminated by replacing the integer
60 by the floating-point number 60.0. Moreover, t3 is used only once to transmit its
value to id1, so the optimizer can transform the code above into the shorter sequence.
t1 = id3 * 60.0
id1 = id2 + t1
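The two optimizations described above can be sketched as small passes over the three-address code. This is an illustrative toy, not a real optimizer; it assumes temporaries are named t1, t2, and so on.

```python
# A minimal sketch of the optimizations in the text: fold the compile-time
# inttofloat of an integer constant, then eliminate the final copy through
# a temporary. Assumes "dest = expr" lines and tN-named temporaries.
def optimize(tac):
    # 1. constant folding: inttofloat(c) with integer c becomes c.0
    consts, folded = {}, []
    for line in tac:
        dest, expr = line.split(" = ", 1)
        if expr.startswith("inttofloat(") and expr[11:-1].isdigit():
            consts[dest] = expr[11:-1] + ".0"
        else:
            for name, val in consts.items():
                expr = expr.replace(name, val)
            folded.append((dest, expr))
    # 2. copy elimination: if the last line just copies a temporary,
    #    write the temporary's defining instruction directly to the target
    dest, expr = folded[-1]
    if expr.startswith("t") and " " not in expr:
        folded = [(dest if d == expr else d, e) for d, e in folded[:-1]]
    # 3. renumber the surviving temporaries t1, t2, ... in definition order
    renames, out = {}, []
    for d, e in folded:
        for old, new in renames.items():
            e = e.replace(old, new)
        if d.startswith("t"):
            renames[d] = f"t{len(renames) + 1}"
            d = renames[d]
        out.append(f"{d} = {e}")
    return out

tac = ["t1 = inttofloat(60)", "t2 = id3 * t1", "t3 = id2 + t2", "id1 = t3"]
print(optimize(tac))
# ['t1 = id3 * 60.0', 'id1 = id2 + t1']
```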
CODE GENERATOR
The code generator takes as input an intermediate representation of the source
program and maps it into the target language.
If the target language is machine code, registers or memory locations are selected
for each of the variables used by the program.
Then, the intermediate instructions are translated into sequences of machine
instructions that perform the same task.
A crucial aspect of code generation is the judicious assignment of registers to hold
variables.
LDF R2, id3
MULF R2, R2, #60.0
LDF R1, id2
ADDF R1, R1, R2
STF id1, R1
The first operand of each instruction specifies a destination. The F in each instruction
tells us that it deals with floating-point numbers.
SYMBOL TABLE MANAGEMENT
An essential function of a compiler is to record the variable names used in the source
program and collect information about various attributes of each name.
These attributes may provide information about the storage allocated for a name, its
type, its scope (where in the program its value may be used), and in the case of
procedure names, such things as the number and types of its arguments, the method
of passing each argument (for example, by value or by reference), and the type
returned.
The symbol table is a data structure containing a record for each variable name, with
fields for the attributes of the name. The data structure should be designed to allow
the compiler to find the record for each name quickly and to store or retrieve data
from that record quickly.
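A minimal sketch of such a data structure, assuming a dictionary per scope with a link to the enclosing scope; the attribute field names used here are illustrative.

```python
# A minimal sketch of a symbol table with nested scopes. Each scope maps
# a name to a record of attributes (type, storage, etc.); lookups fall
# back to the enclosing scope when the name is not defined locally.
class SymbolTable:
    def __init__(self, parent=None):
        self.entries = {}          # name -> attribute record
        self.parent = parent       # enclosing scope, if any

    def define(self, name, **attrs):
        self.entries[name] = attrs

    def lookup(self, name):
        # search this scope first, then the enclosing scopes
        if name in self.entries:
            return self.entries[name]
        if self.parent is not None:
            return self.parent.lookup(name)
        return None

globals_ = SymbolTable()
globals_.define("rate", type="float", storage=8)
inner = SymbolTable(parent=globals_)
inner.define("i", type="int", storage=4)
print(inner.lookup("rate"))
# {'type': 'float', 'storage': 8}
```

A dictionary gives the fast store/retrieve access the text calls for; real compilers often add hashing of interned names and per-scope stacks on top of the same idea.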
TYPES OF COMPILERS (BASED ON NO. OF PASSES)
Single-Pass Compiler: A single-pass compiler processes the source code in a single
pass, from start to finish. It reads the source code, performs lexical analysis, syntax
analysis, and code generation in one go. Single-pass compilers are generally less
memory-intensive but may have limitations in terms of optimization.
Multi-Pass Compiler: Multi-pass compilers make multiple passes over the source
code. Each pass performs a specific task, such as creating a symbol table, generating
intermediate code, or optimizing the code. Multi-pass compilers often produce more
optimized code but may require more memory and processing time.
SOME OTHER TYPES OF COMPILERS
Front-End Compiler: The front-end of a compiler handles tasks like lexical analysis,
syntax analysis, and semantic analysis. It produces an intermediate representation of
the source code that captures its structure and meaning. Front-end compilers focus on
language-specific aspects.
Back-End Compiler: The back-end of a compiler takes the intermediate
representation generated by the front-end and performs tasks like code optimization
and code generation. It translates the intermediate code into target machine code or
another intermediate form suitable for the target platform.
Optimizing Compiler: An optimizing compiler focuses on improving the performance
of the generated code. It applies various optimization techniques to reduce execution
time, minimize memory usage, and enhance the efficiency of the program.
Just-In-Time (JIT) Compiler: JIT compilers are used in runtime environments where
they translate high-level code (often bytecode) into machine code at runtime, just
before execution. This allows for dynamic optimizations based on the runtime context
and can result in improved performance.
Static Compiler: A static compiler translates the entire source code into machine code
or intermediate code before execution. The resulting code is stored and executed as
a separate entity, making it suitable for traditional software applications.
Dynamic Compiler: A dynamic compiler generates machine code during program
execution. It is often associated with dynamic languages and JIT compilation, allowing
code to be compiled and executed on-the-fly.
Cross Compiler: A cross compiler generates code for a target platform that is
different from the platform on which the compiler itself is running. Cross compilers are
useful for developing software for embedded systems or different architectures.
Bootstrapping Compiler: A bootstrapping compiler is a compiler that is written in the
same language it compiles. It is used to compile itself, creating a self-sustaining
environment.
COMPILER CONSTRUCTION TOOLS
These tools use specialized languages for specifying and implementing specific
components, and many use quite sophisticated algorithms.
1. Parser generators that automatically produce syntax analyzers
from a grammatical description of a programming language.
2. Scanner generators that produce lexical analyzers from a regular-expression
description of the tokens of a language.
3. Syntax-directed translation engines that produce collections of routines for
walking a parse tree and generating intermediate code.
4. Code-generator generators that produce a code generator from a collection of
rules for translating each operation of the intermediate language into the machine
language for a target machine.
5. Data-flow analysis engines that facilitate the gathering of information about how
values are transmitted from one part of a program to each other part. Data-flow
analysis is a key part of code optimization.
6. Compiler-construction toolkits that provide an integrated set of routines for
constructing various phases of a compiler.
APPLICATIONS OF COMPILER TECHNOLOGY
Implementation of high-level programming languages: optimizing compilers include techniques to improve the performance of generated code, thus offsetting the inefficiency introduced by high-level abstractions.
Optimizations for computer architectures: parallelism, memory hierarchy.
Design of new computer architectures: RISC, specialized architectures.
Program translations: binary translation, hardware synthesis, database query interpreters, compiled simulation.
Software productivity tools: type checking, bounds checking, memory-management tools.
TRANSITION DIAGRAMS
Transition diagrams, also known as state transition diagrams or finite state machines,
are graphical representations used in various phases of compiler design, particularly
in lexical analysis. They depict the transitions between different states based on input
symbols and play a crucial role in recognizing and tokenizing sequences of
characters in the source code.
Transition diagrams provide a clear and visual way to represent the behavior of
lexical analyzers and finite automata. They aid in designing, analyzing, and
implementing the pattern recognition logic required to process and tokenize the input
source code. Properly designed transition diagrams contribute to the efficiency and
accuracy of the lexical analysis phase in compiler design.
Lexical analysis can be designed using transition diagrams.
Finite automaton (transition diagram): a directed graph or flowchart used to recognize tokens.
A transition diagram has two parts:
States: represented by circles.
Edges: states are connected by edges (arrows).
Lexical Analysis Phase: In lexical analysis, transition diagrams are used to model the
behavior of a finite automaton that recognizes tokens in the source code. Each state
represents a particular state of recognition, and the transitions between states are
labeled with input symbols (characters). By following the transitions, the automaton
identifies and tokenizes lexemes (lexical units) such as keywords, identifiers, literals,
and operators.
Once transition diagrams are designed and understood, they can be translated into
code that implements the corresponding finite automaton in the compiler. This code is
responsible for recognizing and tokenizing lexemes during the lexical analysis phase.
TRANSITION DIAGRAM FOR AN IDENTIFIER
An identifier starts with a letter followed by letters or Digits.
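This diagram can be written directly as code: state 0 is the start state, state 1 is reached on a letter and loops on letters or digits, and any other character ends the lexeme. The function below is an illustrative hand-coded version of that automaton.

```python
# A minimal sketch of the identifier transition diagram:
# state 0 (start) --letter--> state 1 (accepting) --letter|digit--> state 1
def match_identifier(text):
    state = 0
    length = 0
    for ch in text:
        if state == 0 and ch.isalpha():
            state = 1              # first character must be a letter
        elif state == 1 and (ch.isalpha() or ch.isdigit()):
            state = 1              # letters or digits may follow
        else:
            break                  # any other character ends the lexeme
        length += 1
    # accept only if we stopped in the accepting state
    return text[:length] if state == 1 else None

print(match_identifier("rate60 = 5"))   # 'rate60'
print(match_identifier("60rate"))       # None: cannot start with a digit
```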
BOOTSTRAPPING
Bootstrapping in the context of compiler design refers to the process of using a
simple and minimalistic compiler (or assembler) to develop a more sophisticated and
feature-rich compiler for a high-level programming language. This term originates
from the phrase "pulling oneself up by one's bootstraps," suggesting a self-reliant
and recursive process.
THE PROCESS
Initial Compiler/Assembler: A basic compiler or assembler is manually created or
obtained. This initial tool is typically simple, capable of understanding a limited
subset of the target programming language or assembly language.
Compiler Development: The initial compiler is used to write a more advanced version
of the same compiler. This advanced version can understand a larger subset of the
target language and may include additional features like error checking,
optimizations, and improved code generation.
Self-Compilation: The advanced compiler is then used to compile its own source code.
This step involves translating the source code of the advanced compiler into machine
code using the initial compiler. The resulting compiled code is the updated version of
the advanced compiler.
Iterative Improvement: With the updated advanced compiler, developers can now
enhance and extend the features of the compiler further. This process is iterative, with
each iteration improving the capabilities of the compiler and making it more powerful
and feature-rich.
Full Language Support: As the bootstrapping process continues, the compiler's ability
to handle the entire target programming language improves. It becomes capable of
understanding and compiling the full language specification, including all syntax rules
and features.
ADVANTAGES
Self-Reliance: Bootstrapping allows a development team to create a sophisticated
compiler for a language without relying on existing, complex compilers. This is
particularly useful when developing compilers for new languages.
Understanding: The bootstrapping process forces developers to understand both the
target language and the intricacies of compiler design, resulting in a deeper
understanding of language semantics and compiler construction.
Incremental Development: Bootstrapping enables incremental development.
Developers can start with a basic compiler and progressively refine it, ensuring that
each iteration is a functional improvement.
Code Quality: The self-compilation step acts as a form of validation. If the compiled
advanced compiler produces the expected results, it indicates that the compiler is
correct and functional.
CROSS COMPILER
Compilers are tools used to translate a high-level programming language into a low-level
language. A simple compiler works on one system only; when we need a compiler that can
produce code for another platform, a cross compiler is used.
A cross compiler is a compiler capable of creating executable code for a platform
other than the one on which the compiler is running. For example, a cross compiler
executes on machine X and produces machine code for machine Y.
USES
In bootstrapping, a cross compiler is used for transitioning to a new platform: when developing software for a new platform, a cross compiler compiles necessary tools such as the operating system and a native compiler.
For microcontrollers, we use a cross compiler because they do not support an operating system.
It is useful for embedded computers with limited computing resources.
A cross compiler is used to compile for a platform on which it is not practical to do the compiling.
When direct compilation on the target platform is not feasible, we can use a cross compiler.
It helps to keep the target environment separate from the build environment.
REPRESENTATION OF A COMPILER
A compiler is characterized by three languages:
The source language S that is compiled.
The target language T that is generated.
The implementation language I that is used to write the compiler.
WORKING
ADVANTAGES
Cross compilers can be used to develop software for multiple platforms.
They can optimize code for a specific platform, even if the development environment
is different.
Cross compilers are often used in embedded systems development, where resources
are limited.
DISADVANTAGES
Cross compilers can be complex to set up and use, and require additional software
and tools.
They may not be as efficient as native compilers in terms of memory usage.
Cross compilers require the developer to have a good understanding of the target
system’s hardware and operating system.
NATIVE COMPILERS
Native compilers are compilers that generate code for the same platform on which they run. They convert a high-level language into the computer's native language. For example, if a compiler runs on a Windows machine and produces executable code for Windows, then it is a native compiler; Turbo C and GCC are examples. Native compilers are widely used because they can optimize code for the specific processor and operating system of the host machine, resulting in faster and more efficient code execution. They are also easier to use, since they do not require any additional setup or configuration.
Native compilers work by analyzing the source code and generating machine code that is specific to the processor and operating system of the host machine. They can perform various optimizations, such as loop unrolling, function inlining, and instruction scheduling, to produce code that executes faster and more efficiently.
SOME KEY DIFFERENCES
Platform support: Native compilers generate code for the same platform as the one
they are running on, while cross compilers generate code for a different platform.
Build environment: Native compilers are built and run on the same platform, while
cross compilers require a build environment that is different from the target platform.
Performance: Native compilers may produce faster code as they can take advantage
of platform-specific optimizations, while cross compilers may produce slower code as
they have to accommodate the differences between the source and target platforms.
Development time: Cross compilers may require more development time as they need
to be tested and validated on both the source and target platforms, while native
compilers are typically faster to develop and easier to test.
Native Compiler vs. Cross Compiler:
Target: A native compiler translates programs for the same hardware/platform/machine on which it is running; a cross compiler translates programs for a different hardware/platform/machine than the one on which it is running.
Use: A native compiler is used to build programs for the same system/machine and OS on which it is installed; a cross compiler is used to build programs for other systems/machines, such as AVR/ARM.
Dependence: A native compiler is dependent on the system/machine and OS; a cross compiler is independent of them.
Output: A native compiler can generate an executable file such as .exe; a cross compiler can generate raw code such as .hex.
Examples: Turbo C and GCC are native compilers; Keil is a cross compiler.
Typical use: Native compilers are used for development and testing on the same system; cross compilers are used for cross-platform development and porting.
Example setups: GCC on a Linux machine is a native compiler; an ARM compiler on a Windows machine compiling code for a Raspberry Pi is a cross compiler.
Advantage: Native compilers are typically faster and more efficient than cross compilers; cross compilers enable developers to compile code for multiple platforms.
Disadvantage: Native compilers cannot be used for cross-platform development; cross compilers can be slower and less efficient than native compilers.
BOOTSTRAPPING
Bootstrapping is a process in which a simple language is used to translate a more
complicated program, which in turn may handle an even more complicated program, and so on.
Writing a compiler for any high-level language is a complicated process, and it takes a lot
of time to write a compiler from scratch. Hence a simple language is used to generate the
target code in stages.
Bootstrapping is a common practice in compiler development and has been used to
create many well-known compilers, such as GCC (GNU Compiler Collection). It
showcases the power of using a simple tool to create more advanced and capable
software, leading to a self-sustaining cycle of improvement.
To clearly understand the bootstrapping technique, consider the following scenario.
Suppose we want to write a cross compiler for a new language X.
The implementation language of this compiler is, say, Y and the target code being
generated is in language Z; that is, we create XYZ.
Now if an existing compiler for Y runs on machine M and generates code for M, it is
denoted YMM. If we run XYZ using YMM, then we get a compiler XMZ: a compiler for source
language X that generates target code in language Z and runs on machine M.
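The XYZ/YMM bookkeeping above can be sketched by representing a compiler as a (source, implementation, target) triple; the function below is only an illustration of how the languages combine, not a real build tool.

```python
# A minimal sketch of T-diagram composition: a compiler is a triple
# (source, implementation, target). Running compiler S-I-T through an
# existing compiler for I on machine M yields a compiler S-M-T.
def run_through(compiler, host):
    source, impl, target = compiler
    host_source, _host_impl, host_target = host
    # the host must be able to compile the implementation language
    assert impl == host_source, "host cannot compile this implementation"
    # the host translates the compiler's implementation into host_target code
    return (source, host_target, target)

XYZ = ("X", "Y", "Z")   # new compiler: source X, written in Y, emits Z
YMM = ("Y", "M", "M")   # existing compiler for Y running on machine M
print(run_through(XYZ, YMM))   # ('X', 'M', 'Z'), i.e. the compiler XMZ
```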
EXAMPLE
Goal: a compiler that takes the C language and generates assembly language as output,
given a machine that runs assembly language.
Step 1: First we write a compiler for a small subset of C (call it C0) in assembly language.
Step 2: Then, using C0, a compiler for the full source language C is written.
Step 3: Finally, we compile the second compiler: using compiler 1, compiler 2 is compiled.
Step 4: Thus we get a compiler written in assembly that compiles C and generates code in assembly.
Bootstrapping is the process of writing a compiler for a programming language using
the language itself. In other words, it is the process of using a compiler written in a
particular programming language to compile a new version of the compiler written in
the same language.
ADVANTAGES
1. It ensures that the compiler is compatible with the language it is designed to
compile. Because the compiler is written in the same language, it is better able to
understand and interpret the syntax and semantics of the language.
2. It allows for greater control over the optimization and code-generation process.
Since the compiler is written in the target language, it can be optimized to generate
code that is more efficient and better suited to the target platform.
DISADVANTAGES
Bootstrapping also has some disadvantages.
One disadvantage is that it can be a time-consuming process, especially for complex
languages or compilers.
It can also be more difficult to debug a bootstrapped compiler, since any errors or
bugs in the compiler will affect subsequent versions of the compiler.
