SYSTEM SOFTWARE (M1 TO M4) ShortNotes

The document provides an overview of system software, its distinction from application software, and various types of operating systems and their functionalities. It also covers language processors, including compilers, interpreters, and assemblers, as well as the processes of linking, loading, and macro usage in programming. Additionally, it details the compilation phases, design procedures for assemblers, and advanced macro facilities, emphasizing the importance of macros in simplifying code and improving efficiency.


MODULE 1

System Software
System software is a type of computer program that is designed to run a computer’s hardware and application
programs. It is software designed to provide a platform for other software. Examples of system
software include operating systems, device drivers, firmware, and utility programs. The system software is the
interface between the hardware and user applications. The operating system (OS) is the best-known example
of system software.

System software vs application software


System software and application software are two types of computer programs. System software is designed
to run a computer's hardware and provide a platform for other software to run on. It includes operating
systems, device drivers, firmware, and utility programs. Application software, on the other hand, is designed
to perform specific tasks or functions for the user. Examples of application software include word processors,
spreadsheets, web browsers, and games. The main difference between system software and application
software is that system software provides a platform for application software to run on while application
software performs specific tasks for the user.

Operating system types and their functionality


Operating systems can be classified into several types based on their functionality and design. Some of the
common types of operating systems are:
1. Batch Operating System: This type of operating system is designed to process a large number of similar jobs
in batches without any user interaction.
2. Time-Sharing Operating System: This type of operating system allows multiple users to share the same
computer resources simultaneously.
3. Real-Time Operating System: This type of operating system is designed to handle real-time applications that
require immediate response from the computer system.
4. Network Operating System: This type of operating system is designed to manage and control network
resources such as servers, printers, and other devices.
5. Distributed Operating System: This type of operating system manages a group of independent computers
and makes them appear as a single computer.
The functions of an operating system include memory management, process management, device management,
file management, security, control over system performance, job accounting, error detection, coordination
between other software and users, and free-space management. The specific features provided by an operating
system depend on its design and intended use.

Language processor
A language processor is a type of software that converts programming source code into machine-readable
code. There are three types of language processors:
1. Compiler: A compiler is a program that converts the entire source code into machine code in one go. It
checks for syntax errors and generates an object file that can then be linked into an executable.
2. Interpreter: An interpreter is a program that converts high-level language to machine-level language line by
line. It first scans one line of a program or source code, checks for errors, and then executes it.
3. Assembler: An assembler is a program that converts assembly language into machine code. It translates
mnemonic codes into their equivalent binary codes. The choice of language processor depends on the
programming language used and the specific requirements of the application being developed.
This Note Is Uploaded to BSC CS Calicut University Study Notes
Compiler vs interpreter
A compiler and an interpreter are two types of language processors used in programming.
A compiler is a program that converts the entire source code into machine code in one go. It checks for syntax
errors and generates an object file that can then be linked into an executable. Compiled code usually runs
faster than interpreted code because the entire program is translated before execution.
An interpreter, on the other hand, is a program that converts high-level language to machine-level language
line by line. It first scans one line of a program or source code, checks for errors, and then executes it.
Interpreted code usually runs slower than compiled code because each line has to be translated before it is
executed.
The choice between using a compiler or an interpreter depends on the specific requirements of the application
being developed. Compilers are generally used for large programs where speed is important, while interpreters
are used for smaller programs where ease of use and flexibility are more important. Some programming
languages use both compilers and interpreters depending on the specific task being performed.

Loader
In computing, a loader is a program that loads machine codes of a program into the system memory. It is one
of the essential stages in the process of starting a program. In an operating system, the loader is responsible
for loading programs into memory and preparing them for execution. The loader may also perform additional
tasks such as resolving external references and relocating code to different memory locations.

Linker
A linker is a utility program that combines several separately compiled modules into one, resolving
references between them. When a program is assembled/compiled, an intermediate form is produced into
which it is necessary to incorporate libraries and any other modules supplied by the user. The linker combines
these modules and libraries to create an executable file that can be run on the target system.
In computer science, a linker is a computer program that takes one or more object files generated by a compiler
and combines them into one executable program. The linker resolves external references between object files
and libraries, assigns memory addresses to code and data sections, and generates relocation information for
use by the loader.
The process of linking is important because it allows programmers to write modular code in separate files,
which can be compiled independently and then linked together to form a complete program.
Dynamic and static linking are two methods used by a linker to combine multiple object files into a single
executable program. Static linking involves combining all necessary object files into a single executable file at
compile time. This means that all libraries and modules required by the program are included in the final
executable, making it self-contained and independent of external dependencies. Static linking can result in
larger executable files but can also provide faster startup times and better performance.
Dynamic linking, on the other hand, involves postponing the resolution of some symbols until runtime. When
the program is executed, dynamic link libraries (DLLs) or shared objects are loaded as needed. This allows
multiple programs to share common libraries, reducing memory usage and disk space requirements. Dynamic
linking can result in smaller executable files but may also lead to slower startup times due to the need to load
external libraries at runtime.
Both dynamic and static linking have their advantages and disadvantages, and the choice between them
depends on factors such as performance requirements, memory usage constraints, and compatibility with
other software components.
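The symbol resolution and address assignment a linker performs can be sketched as a toy model. The module layout, symbol names, and sizes below are illustrative assumptions, not any real object-file format:

```python
# A toy "linker" sketch: each object module exports symbols at local
# offsets and imports symbols it does not define. Linking lays the
# modules out end-to-end and resolves every reference to an absolute
# address. Module names and contents are illustrative.

def link(modules):
    base, bases, symbols = 0, {}, {}
    # Pass 1: assign each module a base address and build a global symbol table.
    for mod in modules:
        bases[mod["name"]] = base
        for sym, offset in mod["exports"].items():
            symbols[sym] = base + offset
        base += mod["size"]
    # Pass 2: resolve each external reference to an absolute address.
    resolved = {}
    for mod in modules:
        for sym in mod["imports"]:
            resolved[(mod["name"], sym)] = symbols[sym]
    return bases, resolved

main_mod = {"name": "main", "size": 100,
            "exports": {"main": 0}, "imports": ["sqrt"]}
mathlib  = {"name": "mathlib", "size": 50,
            "exports": {"sqrt": 10}, "imports": []}
bases, resolved = link([main_mod, mathlib])
print(bases["mathlib"])             # 100
print(resolved[("main", "sqrt")])   # 110
```

A real linker does the same bookkeeping over code and data sections, and also emits relocation information for the loader.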

Macros
Macros are a useful feature in programming languages that allow a sequence of instructions to be defined and
used multiple times throughout a program. Macros are especially useful when the same sequence of
instructions is used repeatedly, possibly by different programmers working on a project.
In general, macros are used to simplify code and make it more readable by replacing complex or repetitive
code with a single macro statement. Some pre-compilers also use the macro concept. However, in higher-level
languages, any language statement is about as easy to write as an assembler macro statement.
Macros can be defined using special syntax provided by the programming language or using pre-processor
directives. The use of macros can improve code readability and maintainability, but it can also lead to code
bloat if not used judiciously.
Macro expansion is a feature in programming languages that lets a programmer define pseudo-operations:
operations that are generally desirable, are not implemented as part of the processor's instruction set, and
can be implemented as a sequence of instructions.
Each use of a macro generates new program instructions; the macro has the effect of automating writing of
the program. Macros can be defined and used in many programming languages, like C, C++, etc.

Assembler directives and advanced assembler directives


Assembler directives are special instructions that are used by the assembler to control the assembly process.
They provide additional information to the assembler about how to assemble the program, such as where to
place code and data in memory, how to handle symbols and labels, and how to generate object files.
Advanced assembler directives are a set of more complex directives that provide additional functionality
beyond basic assembler directives. These advanced directives include conditional assembly, macro expansion,
and file inclusion.
Conditional assembly allows the programmer to include or exclude sections of code based on certain
conditions. This can be useful for creating different versions of a program for different platforms or
configurations.
Macro expansion allows the programmer to define macros that can be expanded into larger sections of code.
This can simplify code and make it more readable by replacing complex or repetitive code with a single macro
statement. File inclusion allows the programmer to include external files in their program. This can be useful
for including libraries or other modules that are needed by the program.
Overall, advanced assembler directives provide powerful tools for controlling the assembly process and
creating efficient, maintainable programs.

Pass structure of an assembler


The pass structure of an assembler refers to the process by which an assembler translates a program from
assembly language to machine code. A two-pass assembler performs this translation in two passes or phases.
In the first pass, the assembler reads through the entire source code and builds a symbol table that contains
information about all labels and symbols used in the program. The assembler also generates intermediate code
that includes information about memory locations and addresses.

In the second pass, the assembler uses the symbol table and intermediate code generated in the first pass to
generate machine code. The assembler replaces symbolic addresses with numeric addresses and generates
object files that can be executed on a target system.
The two-pass structure of an assembler allows for more efficient translation of assembly language programs
into machine code. By building a symbol table in the first pass, the assembler can resolve any forward
references in subsequent passes, resulting in more accurate and efficient translation.
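The two passes described above can be illustrated with a toy assembler. The `LABEL: OP OPERAND` syntax and one-word-per-instruction layout are simplifying assumptions for illustration, not a real instruction set:

```python
# A minimal two-pass assembler sketch. Pass 1 builds the symbol table
# (label -> address); pass 2 replaces symbolic operands with the
# addresses found in pass 1, resolving forward references.

def assemble(lines):
    symtab, lc, stripped = {}, 0, []
    # Pass 1: assign a location-counter value to every label.
    for line in lines:
        if ":" in line:
            label, line = line.split(":", 1)
            symtab[label.strip()] = lc
        line = line.strip()
        if line:
            stripped.append(line)
            lc += 1                       # each instruction occupies one word
    # Pass 2: replace symbolic operands with numeric addresses.
    code = []
    for line in stripped:
        parts = line.split()
        op, operand = parts[0], parts[1] if len(parts) > 1 else None
        if operand in symtab:
            operand = symtab[operand]     # resolve label to its address
        code.append((op, operand))
    return symtab, code

symtab, code = assemble([
    "        LOAD X",      # forward reference to X
    "        JMP  DONE",   # forward reference to DONE
    "X:      ADD  X",
    "DONE:   HALT",
])
print(symtab)   # {'X': 2, 'DONE': 3}
print(code[0])  # ('LOAD', 2)
```

The forward references to `X` and `DONE` could not be resolved in a single left-to-right scan, which is exactly why the symbol table is completed before pass 2 begins.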

Phases in compilation
Compilation is the process of translating a high-level programming language into machine code that can be
executed by a computer. The following is a typical breakdown of the overall task of a compiler in an
approximate sequence:
1. Lexical Analysis: This phase involves breaking up the source code into tokens, which are meaningful units
such as keywords, identifiers, and operators.
2. Syntax Analysis: This phase involves analysing the structure of the program to ensure that it conforms to
the rules of the programming language's grammar. This phase also generates an abstract syntax tree (AST) that
represents the structure of the program.
3. Semantic Analysis: This phase involves checking for semantic errors in the program, such as type mismatches
and undeclared variables.
4. Intermediate Code Generation: This phase involves generating an intermediate representation of the
program that can be optimized before being translated into machine code.
5. Code Optimization: This phase involves transforming the intermediate code to improve its efficiency and
reduce its size.
6. Code Generation: This phase involves translating the optimized intermediate code into machine code that
can be executed by a computer.
7. Symbol Table Management: Throughout all of these phases, the compiler maintains information about
symbols used in the program, such as variable names and function names.
8. Error Handling: Error handling also spans all phases, generating error messages when errors are detected
in any of them.
These phases may vary depending on the specific compiler implementation and programming language being
compiled, but they provide a general overview of what happens during compilation.
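The first phase, lexical analysis, can be sketched with a small regex-based tokenizer. The token set below is an illustrative subset, not a full language:

```python
# A sketch of the lexical-analysis phase: breaking source text into
# (token_type, lexeme) pairs with regular expressions.
import re

TOKEN_SPEC = [
    ("KEYWORD",    r"\b(?:if|else|while|return)\b"),
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[+\-*/=<>]"),
    ("PUNCT",      r"[();{}]"),
    ("SKIP",       r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    tokens = []
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":      # whitespace is discarded, not tokenized
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(tokenize("if (x1 > 42) return x1;"))
# first token: ('KEYWORD', 'if')
```

The resulting token stream is what the syntax-analysis phase consumes to build the abstract syntax tree.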

Design procedure of a two-pass assembler


The design procedure of a two-pass assembler involves the following six steps:
1. Specification of the problem: This step involves defining the requirements and objectives of the assembler.
2. Specification of the data structures: This step involves identifying the data structures that will be used by
the assembler, such as symbol tables and intermediate code representations.
3. Defining the format of data structures: This step involves specifying how data will be stored in each data
structure, such as how symbols will be represented in a symbol table.
4. Specifying the algorithm: This step involves defining the algorithm that will be used by the assembler to
translate assembly language into machine code.
5. Looking for modularity: This step involves checking that the program can be subdivided into independent
programming units, i.e., breaking down the assembler into smaller modules that can be developed and tested
independently.
6. Repeating steps 1-5 on modules for accuracy and checking errors: This step involves testing each module
individually to ensure that it works correctly before integrating it with other modules.
By following these steps, a programmer can design an efficient and accurate two-pass assembler that can
translate assembly language programs into machine code.
MODULE 2
Macros and its function
Macros are single-line abbreviations for a certain group of instructions. Once the macro is defined, these
groups of instructions can be used anywhere in a program. Macros are similar to subroutines or procedures,
but there are important differences between them. A subroutine is a section of the program that is written
once and can be used many times by simply calling it from any point in the program. Similarly, a macro is a
section of code that the programmer writes (defines) once and then can use many times.
Macros have several functions in programming. They can simplify code by reducing redundancy and making it
easier to read and understand. Macros can also improve program efficiency by reducing the number of
instructions executed at runtime. Additionally, macros are convenient in boot code during system startup,
since they do not require full compilation; macro processing alone, which is a comparatively small task, is enough.

Types of macros
1. Simple Macros: These are the most basic type of macros, which simply replace a single line of code with
another line of code.
2. Parameterized Macros: These macros allow you to pass parameters to the macro, which can be used to
customize the behaviour of the macro.
3. Conditional Macros: These macros allow you to define different code blocks based on certain conditions.
This can be useful for creating more flexible and adaptable code.
4. Looping Macros: These macros allow you to repeat a block of code multiple times, with different values for
each iteration.
5. Recursive Macros: These macros allow you to call the macro from within itself, allowing for more complex
and powerful functionality.

Macro Expansion
Macro expansion is the process of replacing a macro call with its corresponding code. Once a macro is defined,
the macro name can be used instead of using the entire instruction sequence again and again. When the
program is compiled or assembled, the macro processor expands each macro call in the source code by
replacing it with its corresponding code.
Macro expansion can happen at either compile-time or runtime, depending on the programming language and
implementation. In compiled languages, macro expansion always happens at compile-time. This means that
the expanded code is generated before the program is run. In interpreted languages, macro expansion may
happen at runtime, which means that the expanded code is generated as needed during program execution.
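Expansion of a single macro call with positional parameters can be sketched as follows. The `&1`, `&2` parameter syntax and the `INCR` macro are illustrative conventions, not tied to any particular assembler:

```python
# A sketch of macro expansion: the macro body is stored once, and each
# call is replaced by the body with the formal parameters (&1, &2, ...)
# substituted by the call's actual arguments.

MACROS = {
    "INCR": ["LOAD &1", "ADD &2", "STORE &1"],
}

def expand_call(name, args):
    body = MACROS[name]
    expanded = []
    for line in body:
        for i, arg in enumerate(args, start=1):
            line = line.replace(f"&{i}", arg)   # positional substitution
        expanded.append(line)
    return expanded

print(expand_call("INCR", ["TOTAL", "ONE"]))
# ['LOAD TOTAL', 'ADD ONE', 'STORE TOTAL']
```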

Types of parameters
There are several types of macro parameters that can be used in programming. Here are some common types:
1. Positional Parameters: These are the most basic type of macro parameters, which are passed to the macro
in a specific order. The macro processor replaces each occurrence of a positional parameter with the
corresponding argument passed to the macro.
2. Keyword Parameters: These parameters allow you to pass arguments to a macro using keywords instead of
positional order. This can make it easier to understand what each argument represents.
3. Default Parameters: These parameters allow you to specify default values for arguments that are not
explicitly passed to the macro.
4. Variable-Length Parameter Lists: These parameters allow you to pass a variable number of arguments to a
macro. This can be useful when you don't know how many arguments will be needed ahead of time.
5. Type Parameters: These parameters allow you to specify the data type of an argument passed to a macro,
which can help ensure that the correct type is used.
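Keyword and default parameters can be sketched together: defaults are stored with the definition, and a call overrides any subset of them by name. The `&NAME` syntax and macro body below are illustrative assumptions:

```python
# A sketch of keyword and default macro parameters: call arguments are
# merged over the stored defaults, so any parameter not passed
# explicitly falls back to its default value.

def expand(body, defaults, **call_args):
    params = {**defaults, **call_args}       # call arguments override defaults
    out = []
    for line in body:
        for name, value in params.items():
            line = line.replace(f"&{name}", value)
        out.append(line)
    return out

body     = ["OPEN &FILE", "SET MODE &MODE"]
defaults = {"MODE": "READ"}

print(expand(body, defaults, FILE="DATA1"))               # default MODE used
print(expand(body, defaults, FILE="DATA2", MODE="WRITE")) # MODE overridden
```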

Advanced Facilities in macro


There are several advanced facilities of macros that can be used in programming. Here are some common
ones:
1. Nested Macros: These allow you to define macros within macros, which can help simplify code and make it
more modular.
2. Macro Libraries: These are collections of pre-defined macros that can be used across multiple programs.
This can save time and effort by providing a set of commonly-used macros that can be easily included in new
programs.
3. Macro Debugging: This allows you to debug macros just like any other part of your program. This can be
useful for identifying errors or issues with macro code.
4. Macro Optimization: This involves optimizing macro code to improve performance and reduce memory
usage. This can be done by minimizing the number of macro calls, reducing the size of macro definitions, and
other techniques.
5. Macro Assembler Directives: These are special instructions that control how the assembler processes macro
definitions and expansions. They allow you to customize the behaviour of the macro processor and optimize
your code for specific platforms or architectures.

Data structure used in macro


Macros do not typically use data structures in the same way that other parts of a program might. However,
some macro processors may use data structures internally to manage macro definitions and expansions.
For example, a macro processor might use a hash table or other data structure to store the names and
definitions of macros. When a macro call is encountered in the source code, the processor can quickly look up
the corresponding definition in the hash table and expand it.
Similarly, some macro processors may use stacks or other data structures to manage nested macros. This allows
them to keep track of which macros are currently being expanded and ensure that each one is expanded
correctly. Overall, while macros themselves do not typically use data structures directly, they may rely on them
internally to manage their behaviour and ensure correct operation.

Features of macro instruction


The features of macro instructions can vary depending on the specific macro language and implementation
being used. However, here are some common features that are often included:
1. Parameterized Macros: These allow you to define macros that take one or more parameters, which can be
used to customize the behaviour of the macro.
2. Conditional Assembly: This allows you to include or exclude parts of a macro definition based on certain
conditions, such as the value of a variable or the presence of a certain feature.
3. Macro Libraries: These are collections of pre-defined macros that can be easily included in new programs.
This can save time and effort by providing a set of commonly-used macros that can be easily reused.
4. Debugging Facilities: These allow you to debug macros just like any other part of your program. This can be
useful for identifying errors or issues with macro code.
5. Code Generation: Macros can be used to generate code automatically based on certain parameters or
conditions. This can help simplify complex tasks and reduce the amount of manual coding required.
6. Nesting: Macros can be nested within other macros, allowing for more complex behaviour and greater
flexibility in programming.

Implementing Two pass algorithm


The two-pass algorithm is a method for designing macro pre-processors that processes the input in two
passes. In the first pass, the algorithm handles macro definitions, and in the second pass, it handles
the calls to those macros. Here is a brief overview of how to implement a two-pass algorithm:
1. In the first pass, scan through the source code and identify all macro definitions. For each definition, store
its name and parameters in a table or other data structure.
2. In the second pass, scan through the source code again and identify all macro calls. For each call, look up
the corresponding macro definition in your table and expand it.
3. During expansion, replace any occurrences of macro parameters with their corresponding arguments from
the call.
4. Repeat steps 2-3 until all macro calls have been expanded.
5. Once all macros have been expanded, output the final source code with all macros replaced by their
expansions.
This is only a high-level overview of a two-pass algorithm for macro pre-processing; the specific details
depend on the programming language and implementation.
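The steps above can be sketched as a small two-pass macro processor. The `MACRO`/`MEND` delimiters and the `&`-prefixed parameters are illustrative conventions, not a specific assembler's syntax:

```python
# A two-pass macro pre-processor sketch: pass 1 records definitions
# (the lines between MACRO and MEND), pass 2 expands every call with
# the call's arguments substituted for the formal parameters.

def pass_one(source):
    macros, output, i = {}, [], 0
    while i < len(source):
        parts = source[i].split()
        if parts and parts[0] == "MACRO":
            name, params = parts[1], parts[2:]
            body, i = [], i + 1
            while source[i].strip() != "MEND":   # collect the macro body
                body.append(source[i])
                i += 1
            macros[name] = (params, body)
        else:
            output.append(source[i])             # non-definition lines pass through
        i += 1
    return macros, output

def pass_two(macros, lines):
    expanded = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] in macros:
            params, body = macros[parts[0]]
            args = dict(zip(params, parts[1:]))
            for b in body:
                for p, a in args.items():
                    b = b.replace(p, a)          # substitute arguments
                expanded.append(b)
        else:
            expanded.append(line)
    return expanded

source = [
    "MACRO SWAP &A &B",
    "  LOAD &A",
    "  XCHG &B",
    "  STORE &A",
    "MEND",
    "SWAP X Y",
]
macros, rest = pass_one(source)
print(pass_two(macros, rest))
# ['  LOAD X', '  XCHG Y', '  STORE X']
```

Note how the two data structures mentioned earlier appear here: a table (dict) of definitions built in pass 1, consulted on every call in pass 2.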

MODULE 3
Types of Loaders
A loader is a major component of an operating system that ensures all necessary programs and libraries are
loaded, which is essential during the startup phase of running a program. It places the libraries and programs
into the main memory in order to prepare them for execution. Loading involves reading the contents of the
executable file that contains the instructions of the program and then doing other preparatory tasks that are
required in order to prepare the executable for running. Two types of loaders are discussed here: relocating
loaders and binary symbolic loaders (BSS). Relocating loaders are especially useful in a dynamic runtime
environment, where the link and load origins depend heavily on the runtime situation. These loaders work
efficiently with support from the operating system and utilize memory and other resources efficiently. BSS
is an example of a relocating loader; it avoids reassembling all subroutines when a single subroutine is
changed.

Program Linking
When a modern program comprises several procedures or subroutines together with the main program
module, the translator (such as a compiler) will translate them all independently into distinct object modules
usually stored in the secondary memory. Execution of the program in such cases is performed by linking
together these independent object modules and loading them into the main memory. Linking of various object
modules is done by the linker. The linking process is done only once, even though the program may be
repeatedly executed. This flexibility of allocation and relocation helps efficient utilization of the main memory.

GNU linker
The GNU linker (ld) is a command-line tool that takes the names of all the object files, and possibly libraries,
to be linked as arguments. It runs on all of the same host platforms as the GNU compiler. The linker is used to
combine multiple object files into a single executable file or library file. It also has a scripting language that can
be used to exercise tighter control over the object file that is output.

Relocating Linking Loaders


Relocating Linking Loaders are a type of loader that can work efficiently with support from the operating system
and utilize memory and other resources efficiently. They are especially useful in a dynamic runtime
environment, wherein the link and load origins are highly dependent upon the runtime situations. Additionally,
to avoid possible assembling of all subroutines when a single subroutine is changed and to perform the task of
allocation and linking for the programmer, the general class of relocating loader was introduced. Binary
symbolic loader (BSS) is an example of a relocating loader.

Loader Schemas
There are different types of loader schemes, each with its own advantages:
1. General loading scheme: the assembler produces object code that a loader places into memory. Because
loaders are smaller in size than assemblers, the loader replaces the assembler in memory, saving memory
and making it available for the user program.
2. Absolute loading scheme: loads a program into memory at a fixed location. This scheme is simple but
inflexible, since it requires that each program always be loaded into memory at the same location.
3. Relocating loading scheme: allows a program to be loaded into any part of memory and still execute
correctly. This type of loader adjusts all references to memory locations in the program so that they refer
to their correct locations in memory.

Binders
A binder is a software utility that combines two or more files into a single file. To bind files, the user selects a
list of files to be put into a host file and the host file, which can be renamed anything the user would like,
compresses the selected files and saves them all in one place under one name. When the user clicks on the
host file, the embedded files are automatically decompressed and, if they contain an application (that is, if the
package includes an executable file), the application is run. One of the most popular binders was Microsoft's
"Pack and Go" feature in PowerPoint. It allowed users to save their slide shows (.ppt files), graphics (.gif, .jpg,
or .bmp files), and sound (.mid, .wav or .au files) in one file under one name.

Linking Loader
A linking loader is a special system program that gathers various object modules, links them together to
produce a single executable binary program, and loads it into memory. This category leads to a popular class
of loaders called direct-linking loaders. The loaders used in these situations are called linking loaders
because they link in the necessary library functions and resolve symbolic references. Essentially, a linking
loader accepts a set of object programs, links them together into a single unit, and loads it into core
memory. Linking loaders additionally perform relocation, overcoming disadvantages of other loading schemes.

Overlays
In a general computing sense, overlaying means "the process of transferring a block of program code or other
data into internal memory, replacing what is already stored". Overlaying is a programming method that allows
programs to be larger than the computer's main memory. An embedded system would normally use overlays
because of the limitation of physical memory, which is internal memory for a system-on-chip, and the lack of
virtual memory facilities.

Dynamic Linking
Dynamic linking is a technique used in computer programming to allow multiple programs to share a single
copy of a subroutine or library. For example, run-time support routines for a high-level language like C can be
shared among several executing programs using dynamic linking. With a program that allows its user to
interactively call any of the subroutines of a large mathematical and statistical library, all of the library
subroutines could potentially be needed, but only a few will actually be used in any one execution. Dynamic
linking avoids loading the entire library for each execution; only the subroutines that are actually needed
are loaded.

Program Relocation
Relocatable programs are designed to be relocated to execute from a storage area other than the one
designated for it at the time of its coding or translation. This is possible because relocatable programs consist
of a program and relevant information for its relocation. The information provided can be used to relocate the
program to execute from a different storage area. On the other hand, non-relocatable programs cannot be
made to execute in any area of storage other than the one designated for it at the time of its coding or
translation.
Relocation is performed by relocating loaders, which are responsible for loading and linking object modules
and performing relocation as necessary. Relocation involves adjusting addresses in an object module so that
they refer to the correct memory locations when loaded into memory. This process is essential when loading
object modules into memory at different locations than where they were originally compiled or assembled.
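The adjustment described above can be sketched with a relocation table that records which words of the object code hold addresses. The module contents below are illustrative assumptions:

```python
# A relocation sketch: the object module was assembled assuming load
# origin 0, and its relocation table lists the indices of the words
# that contain addresses. Loading at a different origin means adding
# the load origin to exactly those words and no others.

def relocate(code, relocation_table, load_origin):
    relocated = list(code)
    for index in relocation_table:
        relocated[index] += load_origin     # adjust address-bearing words only
    return relocated

# Words 1 and 3 hold addresses; words 0 and 2 are opcodes/constants
# and must not be changed.
code      = [10, 40, 20, 44]
reloc_tab = [1, 3]

print(relocate(code, reloc_tab, 1000))   # [10, 1040, 20, 1044]
```

Non-relocatable programs are exactly those that carry no such table, so the loader has no way to know which words to adjust.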

MODULE 4
Phases of compiler
The different phases of a compiler are as follows:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Intermediate code generation
5. Code optimization
6. Code generation
Each phase takes its input from the previous phase, has its own representation of the source program, and feeds
its output to the next phase of the compiler. All of these phases share symbol table management and error
handling to ensure an efficient and accurate translation process.

Compiler Classification
Compilers are classified according to their input/output, internal structure, and behaviour at run time. Here
are some of the different types of compilers:
1. Cross compiler: A compiler that runs on one platform but generates code for another platform.
2. One-pass or multi-pass compiler: A compiler that performs analysis and translation in a single pass or in
multiple passes, respectively. One-pass compilers are faster, but multi-pass compilers can perform deeper
analysis and handle larger programs.
3. Source-to-source compiler: A compiler that takes a high-level language as input and produces its output in a
high-level language (either the same language or a different one).
4. Stage compiler: A compiler that compiles to the assembly language of a theoretical machine.
5. Just-in-time (JIT) compiler: A dynamic compiler that compiles code at run time instead of ahead of time. JIT
compilers are commonly used in virtual machines to improve performance.
Each type of compiler has its own unique characteristics and advantages depending on the specific needs of
your project.

Structure Of Compiler
The scanner is the phase of the compiler that groups the input characters into tokens and begins constructing a
symbol table that is used for contextual analysis of the program code in a later phase. The character strings
that make up tokens are known as lexemes; token classes include keywords, identifiers, operators, and constants
(comments and whitespace are usually discarded at this stage). The scanner phase is also known as lexical
analysis, since it groups input characters into lexical units, and its output is a stream of tokens.
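A minimal scanner can be written with regular expressions; the token classes below are a made-up subset chosen for illustration.

```python
import re

# Toy scanner: groups input characters into (kind, lexeme) tokens.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),   # whitespace is matched but discarded
]
MASTER = re.compile("|".join(f"(?P<{k}>{p})" for k, p in TOKEN_SPEC))

def scan(source):
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(source)
            if m.lastgroup != "SKIP"]

print(scan("total = count + 42"))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '42')]
```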

The parser is a phase of the compiler that helps group tokens into syntactical units. The output we get from
the parser phase is a parse tree, which is a tree representation of the program. The program structure the
parser recognizes is defined by a context-free grammar, which consists of four components:
1. A finite terminal vocabulary Vt, whose symbols are the tokens produced by the scanner
2. A finite set of symbols known as the non-terminal vocabulary Vn
3. A start symbol S for the grammar
4. A finite set of productions, P
Context-free languages are exactly the languages recognized by pushdown automata, which is why a parser can be
modelled as a pushdown automaton.
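A parser for a tiny grammar can be sketched as a recursive-descent routine. The grammar expr → NUMBER ('+' NUMBER)* and the (kind, lexeme) token format are invented for this example.

```python
def parse(tokens):
    # tokens: list of (kind, lexeme) pairs, e.g. produced by a scanner
    pos = 0

    def expect(kind):
        nonlocal pos
        tok_kind, lexeme = tokens[pos]
        if tok_kind != kind:
            raise SyntaxError(f"expected {kind}, got {tok_kind}")
        pos += 1
        return lexeme

    # expr -> NUMBER ('+' NUMBER)*  builds a left-leaning parse tree
    node = ("num", expect("NUMBER"))
    while pos < len(tokens) and tokens[pos] == ("OP", "+"):
        expect("OP")
        node = ("+", node, ("num", expect("NUMBER")))
    return node

print(parse([("NUMBER", "1"), ("OP", "+"), ("NUMBER", "2")]))
# ('+', ('num', '1'), ('num', '2'))
```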

Contextual checkers are used by the compiler to analyse the parse tree for context-sensitive information,
which is known as static semantics. The output of the semantic analysis phase is an annotated parse tree. The
contextual checkers are responsible for checking that the program code follows the rules of the programming
language and that it makes sense in terms of its context.

The symbol table is a data structure used by the compiler to store information about the identifiers and their
attributes encountered in the source program. The symbol table is used by the compiler in later phases for
contextual analysis of the program code. The information stored in the symbol table consists of the names
encountered in the source program together with their attributes, such as type, scope, and storage location.
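A symbol table can be as simple as a dictionary mapping names to attribute records; the attributes chosen below (type and scope) are illustrative, not a fixed standard.

```python
# Toy symbol table: maps identifier names to attribute records.
symbol_table = {}

def insert(name, **attrs):
    symbol_table[name] = attrs

def lookup(name):
    return symbol_table.get(name)  # None signals an undeclared identifier

insert("count", type="int", scope="global")
insert("total", type="float", scope="local")
print(lookup("count"))    # {'type': 'int', 'scope': 'global'}
print(lookup("missing"))  # None
```

Later phases call lookup during contextual analysis; a None result is exactly the "undeclared identifier" error the error handler must report.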

The error handler is another important component of a compiler; it reports, and where possible rectifies, errors
detected during the compilation of the source program. The error handler is responsible for detecting errors,
reporting them to the user, and recovering from them where possible so that compilation can continue.



The intermediate code generator is a phase of the compiler that translates the source program into an
intermediate form rather than directly into target code. An intermediate code is generated when a direct
translation to target code is not desired. The two main reasons for generating an intermediate code rather than
direct target code are:
1. The intermediate form is a simple version of the source code that helps an optimizer to apply the
optimizations, such as common sub-expression elimination and strength reduction.
2. Many compilers, such as cross-compilers, must generate target code for many CPUs; a machine-independent intermediate form lets the analysis phases be shared across all targets.
The data structure passed between the analysis and synthesis phases is called the Intermediate Representation
(IR) of the program. The intermediate representation of a source program can be:
1. Assembly language
2. Abstract syntax tree
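Three-address code is another common intermediate representation. A sketch of generating it from an abstract syntax tree (tuple-encoded here for simplicity; the temporary-naming scheme is invented):

```python
import itertools

_temps = itertools.count(1)  # supplies fresh temporary names t1, t2, ...

def gen_tac(node, code):
    # node is either a variable name (str) or a tuple ('op', left, right)
    if isinstance(node, str):
        return node
    op, left, right = node
    l = gen_tac(left, code)
    r = gen_tac(right, code)
    temp = f"t{next(_temps)}"
    code.append(f"{temp} = {l} {op} {r}")  # one three-address instruction
    return temp

code = []
gen_tac(("*", ("+", "a", "b"), "c"), code)  # the tree for (a + b) * c
print(code)  # ['t1 = a + b', 't2 = t1 * c']
```

Each instruction has at most one operator and three addresses (two operands and a result), which makes later optimization and code generation straightforward.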

The code optimizer is a phase of the compiler that reconstructs a parse tree to reduce the size of the tree or
to reproduce an equivalent tree that gives more efficient code. The following are some examples of code
optimization techniques:
1. Constant folding: Constant folding allows us to reduce calculation in a program code by evaluating constant
expressions at compile-time instead of run-time.
2. Loop-invariant code motion: Loop-invariant code motion helps reduce the calculation inside a loop by moving
computations whose result does not change between iterations outside the loop.
3. Induction variable reduction: Most of a program's execution time is spent in the bodies of loops. Induction
variable reduction improves performance by replacing expensive operations on loop induction variables, such as
multiplications, with cheaper ones, such as additions, thereby reducing the execution time inside a loop.
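Constant folding, the first technique above, can be sketched on a tuple-encoded expression tree (the encoding is invented for this example):

```python
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def fold(node):
    # node: int literal, variable name (str), or a tuple ('op', left, right)
    if not isinstance(node, tuple):
        return node
    op, left, right = node
    left, right = fold(left), fold(right)
    if isinstance(left, int) and isinstance(right, int):
        return OPS[op](left, right)  # evaluate at compile time
    return (op, left, right)

print(fold(("+", ("*", 2, 3), "x")))  # ('+', 6, 'x')
print(fold(("*", 4, 5)))              # 20
```

The sub-expression 2 * 3 is computed once by the compiler, so no multiplication remains at run time.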

The code generator is a phase of the compiler that generates the target code from the intermediate code. The
target code can be machine code or assembly language. There are two approaches to generating target code:
1. Generate code for a specific machine: In this approach, the compiler generates code for a specific machine
architecture. This approach produces highly optimized and efficient code, but it requires a separate
compilation process for each target machine.
2. Generate code for a ‘general’ or abstract machine: In this approach, the compiler generates code for an
abstract machine that is independent of any specific hardware architecture. Then, further translators are used
to turn the abstract code into code for specific machines. This approach produces less efficient but more
portable code.
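The first approach can be sketched as a naive generator targeting a hypothetical single-accumulator machine; the LOAD/ADD/MUL/STORE mnemonics and the three-address input format are invented for illustration.

```python
def gen_code(tac):
    # tac lines look like "t1 = a + b"; each becomes a load/op/store triple
    asm = []
    for line in tac:
        dest, _eq, lhs, op, rhs = line.split()
        asm.append(f"LOAD {lhs}")                       # operand -> accumulator
        asm.append(f"{'ADD' if op == '+' else 'MUL'} {rhs}")
        asm.append(f"STORE {dest}")                     # accumulator -> result
    return asm

print(gen_code(["t1 = a + b", "t2 = t1 * c"]))
# ['LOAD a', 'ADD b', 'STORE t1', 'LOAD t1', 'MUL c', 'STORE t2']
```

Note the naive output stores t1 and immediately loads it back, exactly the kind of redundancy a later optimization pass can remove.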

A peephole optimizer is a type of code optimizer that scans small segments of the target code to improve the
efficiency of instructions in a program code. Peephole optimization is the last phase of the compilation process,
which helps us to discard redundant instructions in the program code.
The peephole optimizer works by examining a small window of instructions, typically three to five instructions
long, and looking for patterns that can be replaced with more efficient sequences. For example, it might replace
a sequence of two or more instructions with a single instruction that performs the same operation.
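One classic peephole pattern is a STORE immediately followed by a LOAD of the same location, where the LOAD is redundant. A sketch over invented (mnemonic, operand) instruction pairs:

```python
def peephole(instrs):
    # Slide a two-instruction window over the code, dropping a LOAD that
    # immediately re-reads the location just STOREd.
    out, i = [], 0
    while i < len(instrs):
        if (i + 1 < len(instrs)
                and instrs[i][0] == "STORE"
                and instrs[i + 1] == ("LOAD", instrs[i][1])):
            out.append(instrs[i])  # keep the STORE, skip the redundant LOAD
            i += 2
        else:
            out.append(instrs[i])
            i += 1
    return out

prog = [("LOAD", "a"), ("ADD", "b"), ("STORE", "t"), ("LOAD", "t")]
print(peephole(prog))
# [('LOAD', 'a'), ('ADD', 'b'), ('STORE', 't')]
```

A real peephole optimizer applies a whole catalogue of such patterns (redundant loads, jumps to jumps, algebraic identities) in the same windowed fashion.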
