A Compiler Is A Computer Program


A compiler is a computer program (or set of programs) that translates text written in a computer

language (the source language) into another computer language (the target language). The
original sequence is usually called the source code and the output is called the object code. Commonly
the output has a form suitable for processing by other programs (e.g., a linker), but it may be a
human-readable text file.

The most common reason for wanting to translate source code is to create an executable
program. The name "compiler" is primarily used for programs that translate source code from a
high-level programming language to a lower level language (e.g., assembly language or machine
language). A program that translates from a low level language to a higher level one is a
decompiler. A program that translates between high-level languages is usually called a language
translator, source-to-source translator, or language converter. A language rewriter is usually a
program that translates the form of expressions without a change of language.

A compiler is likely to perform many or all of the following operations: lexical analysis,
preprocessing, parsing, semantic analysis, code generation, and code optimization.

Compilers in education

Compiler construction and compiler optimization are taught at universities as part of the
computer science curriculum. Such courses are usually supplemented with the implementation of
a compiler for an educational programming language. A well-documented example is Niklaus
Wirth's PL/0 compiler, which Wirth used to teach compiler construction in the 1970s.[3] In spite
of its simplicity, the PL/0 compiler introduced several influential concepts to the field:

1. Program development by stepwise refinement (also the title of a 1971 paper by Wirth[4])
2. The use of a recursive descent parser
3. The use of EBNF to specify the syntax of a language
4. A code generator producing portable P-code
5. The use of T-diagrams in the formal description of the bootstrapping problem
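
Items 2 and 3 above can be illustrated together. The following is only a minimal sketch, not Wirth's PL/0 code: an invented toy expression grammar is written out in EBNF in the comments, and each grammar rule becomes one parsing method in the style of a recursive descent parser.

    # EBNF for a toy expression grammar (an invented example, not Wirth's PL/0):
    #   expression = term   { ("+" | "-") term } ;
    #   term       = factor { ("*" | "/") factor } ;
    #   factor     = NUMBER | "(" expression ")" ;

    def tokenize(text):
        """Split the input into NUMBER tokens and single-character operators."""
        tokens, i = [], 0
        while i < len(text):
            ch = text[i]
            if ch.isspace():
                i += 1
            elif ch.isdigit():
                j = i
                while j < len(text) and text[j].isdigit():
                    j += 1
                tokens.append(("NUMBER", int(text[i:j])))
                i = j
            else:
                tokens.append((ch, ch))
                i += 1
        return tokens

    class Parser:
        """One parsing method per EBNF rule; each builds a node of the parse tree."""

        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos][0] if self.pos < len(self.tokens) else None

        def take(self, kind):
            assert self.peek() == kind, f"expected {kind}, got {self.peek()}"
            value = self.tokens[self.pos][1]
            self.pos += 1
            return value

        def expression(self):
            node = self.term()
            while self.peek() in ("+", "-"):
                node = (self.take(self.peek()), node, self.term())
            return node

        def term(self):
            node = self.factor()
            while self.peek() in ("*", "/"):
                node = (self.take(self.peek()), node, self.factor())
            return node

        def factor(self):
            if self.peek() == "NUMBER":
                return ("num", self.take("NUMBER"))
            self.take("(")
            node = self.expression()
            self.take(")")
            return node

    print(Parser(tokenize("2 * (3 + 4)")).expression())
    # ('*', ('num', 2), ('+', ('num', 3), ('num', 4)))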

Compiler output
One classification of compilers is by the platform on which their generated code executes. This is
known as the target platform.

A native or hosted compiler is one whose output is intended to directly run on the same type of
computer and operating system as the compiler itself runs on. The output of a cross compiler is
designed to run on a different platform. Cross compilers are often used when developing
software for embedded systems that are not intended to support a software development
environment.

The output of a compiler that produces code for a virtual machine (VM) may or may not be
executed on the same platform as the compiler that produced it. For this reason such compilers
are not usually classified as native or cross compilers.

Compiled versus interpreted languages

Higher-level programming languages are generally divided for convenience into compiled
languages and interpreted languages. However, there is rarely anything about a language that
requires it to be exclusively compiled, or exclusively interpreted. The categorization usually
reflects the most popular or widespread implementations of a language — for instance, BASIC is
sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC
compilers and C interpreters.

In a sense, all languages are interpreted, with "execution" being merely a special case of
interpretation performed by transistors switching on a CPU. Modern trends toward just-in-time
compilation and bytecode interpretation also blur the traditional categorizations.

There are exceptions. Some language specifications spell out that implementations must include
a compilation facility; for example, Common Lisp. Other languages have features that are very
easy to implement in an interpreter, but make writing a compiler much harder; for example,
APL, SNOBOL4, and many scripting languages allow programs to construct arbitrary source
code at runtime with regular string operations, and then execute that code by passing it to a
special evaluation function. To implement these features in a compiled language, programs must
usually be shipped with a runtime library that includes a version of the compiler itself.
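
As an illustration, using Python only because it happens to be a language with this feature: a program can assemble source text with ordinary string operations and hand it to the built-in evaluation function, which is why an ahead-of-time implementation of such a language has to ship something equivalent to the compiler in its runtime.

    # Build source code at run time from ordinary strings ...
    op = "+"
    source = "lambda a, b: a " + op + " b"

    # ... then hand it to the language's evaluation facility.
    # Python's built-in eval() invokes the bytecode compiler at run time,
    # which is exactly why such features force a compiler into the runtime.
    add = eval(source)
    print(add(2, 3))  # 5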

Hardware compilation

The output of some compilers may target hardware at a very low level, for example a field-programmable
gate array (FPGA) or a structured application-specific integrated circuit (ASIC).
Such compilers are said to be hardware compilers or synthesis tools because the programs they
compile effectively control the final configuration of the hardware and how it operates; the
output of the compilation is not a sequence of instructions to be executed one after another but an
interconnection of transistors or lookup tables. For example, XST is the Xilinx Synthesis Tool
used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and
other vendors.

Compiler design
The approach taken to compiler design is affected by the complexity of the processing that needs
to be done, the experience of the person(s) designing it, and the resources (e.g., people and tools)
available.

A compiler for a relatively simple language written by one person might be a single, monolithic
piece of software. When the source language is large and complex and high-quality output is
required, the design may be split into a number of relatively independent phases, or passes.
Having separate phases means development can be parceled up into small parts and given to
different people. It also becomes much easier to replace a single phase by an improved one, or to
insert new phases later (e.g., additional optimizations).

The division of the compilation process into phases (or passes) was championed by the
Production Quality Compiler-Compiler Project (PQCC) at Carnegie Mellon University. This
project introduced the terms front end, middle end, and back end.

All but the smallest of compilers have more than two phases. However, these phases are usually
regarded as being part of the front end or the back end. The point at which these two ends meet is
always open to debate. The front end is generally considered to be where syntactic and semantic
processing takes place, along with translation to a lower level of representation (than source
code).

The middle end is usually designed to perform optimizations on a form other than the source
code or machine code. This source code/machine code independence is intended to enable
generic optimizations to be shared between versions of the compiler supporting different
languages and target processors.

The back end takes the output from the middle end. It may perform more analysis, transformations
and optimizations that are specific to a particular computer. Then, it generates code for a particular
processor and OS.

This front-end/middle/back-end approach makes it possible to combine front ends for different
languages with back ends for different CPUs. Practical examples of this approach are the GNU
Compiler Collection, LLVM, and the Amsterdam Compiler Kit, which have multiple front ends,
shared analysis, and multiple back ends.
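
A highly simplified sketch of why that separation pays off: once every front end lowers programs to one shared intermediate form, each back end only needs to understand that form, not any source language. The tiny three-address-style IR and the two toy back ends below are invented for illustration and do not correspond to the actual representations used by GCC, LLVM, or the Amsterdam Compiler Kit.

    # A shared intermediate representation: a list of simple three-address operations.
    # (Invented for illustration; real IRs are far richer.)
    ir = [
        ("load_const", "t1", 2),
        ("load_const", "t2", 3),
        ("add",        "t3", "t1", "t2"),
        ("ret",        "t3"),
    ]

    def backend_stack_vm(ir):
        """One back end: emit instructions for an imaginary stack machine."""
        out = []
        for op, *args in ir:
            if op == "load_const":
                out.append(f"PUSH {args[1]}")
            elif op == "add":
                out.append("ADD")          # operands are implicit on the stack
            elif op == "ret":
                out.append("RET")
        return out

    def backend_register(ir):
        """Another back end: emit pseudo register-machine assembly."""
        out = []
        for op, *args in ir:
            if op == "load_const":
                out.append(f"mov {args[0]}, #{args[1]}")
            elif op == "add":
                out.append(f"add {args[0]}, {args[1]}, {args[2]}")
            elif op == "ret":
                out.append(f"ret {args[0]}")
        return out

    print(backend_stack_vm(ir))   # ['PUSH 2', 'PUSH 3', 'ADD', 'RET']
    print(backend_register(ir))   # ['mov t1, #2', 'mov t2, #3', 'add t3, t1, t2', 'ret t3']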

One-pass versus multi-pass compilers

Classifying compilers by number of passes has its background in the hardware resource
limitations of computers. Compiling involves a great deal of work, and early computers did not
have enough memory to contain one program that did all of it. So compilers were
split up into smaller programs which each made a pass over the source (or some representation of
it) performing some of the required analysis and translations.

The ability to compile in a single pass is often seen as a benefit because it simplifies the job of
writing a compiler, and one-pass compilers are generally faster than multi-pass compilers. Many
languages were designed so that they could be compiled in a single pass (e.g., Pascal).

In some cases the design of a language feature may require a compiler to perform more than one
pass over the source. For instance, consider a declaration appearing on line 20 of the source
which affects the translation of a statement appearing on line 10. In this case, the first pass needs
to gather information about declarations appearing after statements that they affect, with the
actual translation happening during a subsequent pass.
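
A minimal sketch of that situation, using an invented three-line "language" purely for illustration: the uses of size on the first two lines cannot be translated until the declaration on the last line has been seen, so the first pass only collects declarations and the second pass does the translation.

    program = [
        "print size",        # line 1: uses a name ...
        "print size * 2",    # line 2: ... more than once ...
        "const size = 40",   # line 3: ... that is only declared here
    ]

    # Pass 1: gather declarations, wherever they appear in the source.
    constants = {}
    for line in program:
        if line.startswith("const "):
            name, value = line[len("const "):].split("=")
            constants[name.strip()] = int(value.strip())

    # Pass 2: translate, now that every declaration is known.
    for line in program:
        if line.startswith("print "):
            expr = line[len("print "):]
            for name, value in constants.items():
                expr = expr.replace(name, str(value))
            print("emit: PRINT", expr)   # "PRINT 40", then "PRINT 40 * 2"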

The disadvantage of compiling in a single pass is that it is not possible to perform many of the
sophisticated optimizations needed to generate high quality code. It can be difficult to count
exactly how many passes an optimizing compiler makes. For instance, different phases of
optimization may analyse one expression many times but only analyse another expression once.

Splitting a compiler up into small programs is a technique used by researchers interested in
producing provably correct compilers. Proving the correctness of a set of small programs often
requires less effort than proving the correctness of a larger, single, equivalent program.

While the typical multi-pass compiler outputs machine code from its final pass, there are several
other types:

 A "source-to-source compiler" is a type of compiler that takes a high level language as its
input and outputs a high level language. For example, an automatic parallelizing compiler
will frequently take in a high level language program as an input and then transform the
code and annotate it with parallel code annotations (e.g. OpenMP) or language constructs
(e.g. Fortran's DOALL statements).
 Stage compiler that compiles to assembly language of a theoretical machine, like some
Prolog implementations
o This Prolog machine is also known as the Warren Abstract Machine (or WAM).
Bytecode compilers for Java, Python, and many more are also a subtype of this.
 Just-in-time compiler, used by Smalltalk and Java systems, and also by Microsoft .Net's
Common Intermediate Language (CIL)
o Applications are delivered in bytecode, which is compiled to native machine code
just prior to execution.
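
For the bytecode-compiler case mentioned in the list above, Python itself provides a concrete, self-contained example: the built-in compile() function translates source text into bytecode for the CPython virtual machine, which the standard dis module can display.

    import dis

    # compile() is Python's bytecode compiler: source text in,
    # a code object containing CPython VM bytecode out.
    code = compile("x = 2 + 3\nprint(x)", "<example>", "exec")

    dis.dis(code)   # show the virtual-machine instructions
    exec(code)      # let the VM interpret them: prints 5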

Front end

The front end analyzes the source code to build an internal representation of the program, called
the intermediate representation or IR. It also manages the symbol table, a data structure mapping
each symbol in the source code to associated information such as location, type and scope. This
is done over several phases, which include some of the following:

1. Line reconstruction. Languages which strop their keywords or allow arbitrary spaces
within identifiers require a phase before parsing, which converts the input character
sequence to a canonical form ready for the parser. The top-down, recursive-descent,
table-driven parsers used in the 1960s typically read the source one character at a time
and did not require a separate tokenizing phase. Atlas Autocode, and Imp (and some
implementations of Algol and Coral66) are examples of stropped languages whose
compilers would have a Line Reconstruction phase.
2. Lexical analysis breaks the source code text into small pieces called tokens. Each token is
a single atomic unit of the language, for instance a keyword, identifier or symbol name.
The token syntax is typically a regular language, so a finite state automaton constructed
from a regular expression can be used to recognize it. This phase is also called lexing or
scanning, and the software doing lexical analysis is called a lexical analyzer or scanner.
3. Preprocessing. Some languages, e.g., C, require a preprocessing phase which supports
macro substitution and conditional compilation. Typically the preprocessing phase occurs
before syntactic or semantic analysis; e.g. in the case of C, the preprocessor manipulates
lexical tokens rather than syntactic forms. However, some languages such as Scheme
support macro substitutions based on syntactic forms.
4. Syntax analysis involves parsing the token sequence to identify the syntactic structure of
the program. This phase typically builds a parse tree, which replaces the linear sequence
of tokens with a tree structure built according to the rules of a formal grammar which
define the language's syntax. The parse tree is often analyzed, augmented, and
transformed by later phases in the compiler.
5. Semantic analysis is the phase in which the compiler adds semantic information to the
parse tree and builds the symbol table. This phase performs semantic checks such as type
checking (checking for type errors), or object binding (associating variable and function
references with their definitions), or definite assignment (requiring all local variables to
be initialized before use), rejecting incorrect programs or issuing warnings. Semantic
analysis usually requires a complete parse tree, meaning that this phase logically follows
the parsing phase, and logically precedes the code generation phase, though it is often
possible to fold multiple phases into one pass over the code in a compiler
implementation.
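
As a minimal illustration of step 2 above (the token set, keyword list, and sample input are invented, not taken from any particular language), a scanner can be assembled directly from regular expressions; in the sketch below Python's re module plays the role of the finite state automaton built from those expressions.

    import re

    # Each token class is described by a regular expression.
    TOKEN_SPEC = [
        ("NUMBER",  r"\d+"),
        ("IDENT",   r"[A-Za-z_]\w*"),
        ("OP",      r"[+\-*/=]"),
        ("LPAREN",  r"\("),
        ("RPAREN",  r"\)"),
        ("SKIP",    r"\s+"),       # whitespace separates tokens but is discarded
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

    def scan(source):
        """Yield (kind, text) pairs; keywords are identifiers with a known spelling."""
        keywords = {"if", "while", "return"}
        for match in MASTER.finditer(source):
            kind, text = match.lastgroup, match.group()
            if kind == "SKIP":
                continue
            if kind == "IDENT" and text in keywords:
                kind = "KEYWORD"
            yield kind, text

    print(list(scan("return count + 42")))
    # [('KEYWORD', 'return'), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '42')]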

Back end

The term back end is sometimes confused with code generator because of the overlapped
functionality of generating assembly code. Some literature uses middle end to distinguish the
generic analysis and optimization phases in the back end from the machine-dependent code
generators.

The main phases of the back end include the following:

1. Analysis: This is the gathering of program information from the intermediate
representation derived from the input. Typical analyses are data flow analysis to build
use-define chains, dependence analysis, alias analysis, pointer analysis, escape analysis
etc. Accurate analysis is the basis for any compiler optimization. The call graph and
control flow graph are usually also built during the analysis phase.
2. Optimization: the intermediate language representation is transformed into functionally
equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead
code elimination, constant propagation, loop transformation, register allocation or even
automatic parallelization.
3. Code generation: the transformed intermediate language is translated into the output
language, usually the native machine language of the system. This involves resource and
storage decisions, such as deciding which variables to fit into registers and memory and
the selection and scheduling of appropriate machine instructions along with their
associated addressing modes (see also Sethi-Ullman algorithm).
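
A minimal sketch of two of the optimizations listed above, constant propagation with folding followed by dead code elimination, applied to an invented three-address-style intermediate form (not the IR of any real compiler):

    # Intermediate form: (dest, op, arg1, arg2); "const" loads a literal.
    ir = [
        ("a", "const", 2,   None),
        ("b", "const", 3,   None),
        ("c", "add",   "a", "b"),
        ("d", "mul",   "c", "c"),    # result never used below: dead code
        (None, "ret",  "c", None),   # "return c" has no destination
    ]

    # Constant propagation / folding: operations whose inputs are already
    # known constants are replaced by constant loads.
    consts, folded = {}, []
    for dest, op, a1, a2 in ir:
        if op == "const":
            consts[dest] = a1
        elif op in ("add", "mul") and a1 in consts and a2 in consts:
            value = consts[a1] + consts[a2] if op == "add" else consts[a1] * consts[a2]
            consts[dest] = value
            op, a1, a2 = "const", value, None
        folded.append((dest, op, a1, a2))

    # Dead code elimination: drop instructions whose result is never read
    # (keeping "ret", which has an effect of its own).
    used = {a for _, _, a1, a2 in folded for a in (a1, a2) if isinstance(a, str)}
    optimized = [ins for ins in folded if ins[1] == "ret" or ins[0] in used]

    for ins in optimized:
        print(ins)
    # ('c', 'const', 5, None)
    # (None, 'ret', 'c', None)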

Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly
together. For example, dependence analysis is crucial for loop transformation.

In addition, the scope of compiler analyses and optimizations varies greatly, from as small as a
basic block to the procedure/function level, or even over the whole program (interprocedural
optimization). Obviously, a compiler can potentially do a better job using a broader view. But
that broad view is not free: large scope analysis and optimizations are very costly in terms of
compilation time and memory space; this is especially true for interprocedural analysis and
optimizations.

Interprocedural analysis and optimizations are common in modern commercial compilers from
HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The open source GCC was criticized for
a long time for lacking powerful interprocedural optimizations, but it is changing in this respect.
Another open source compiler with full analysis and optimization infrastructure is Open64,
which is used by many organizations for research and commercial purposes.

Due to the extra time and space needed for compiler analysis and optimizations, some compilers
skip them by default. Users have to use compilation options to explicitly tell the compiler which
optimizations should be enabled.
