Lab Manual SPOSL
TITLE Pass I of a two pass assembler.
Learning Outcomes:
Theory:
An assembler translates an assembly language program into machine code. It must output the object program and provide other information required for the linker and loader.
Pass I Tasks:
Assign addresses to all statements in the program (maintain the location counter).
Save the values (addresses) assigned to all labels, for use in Pass II.
Perform processing of assembler directives (e.g. BYTE, RESW directives can affect address assignment).
I = {Sf, Mf}
where,
Sf = Source Code File
Mf = Mnemonic Table
O = {St, Lt, Ic}
where,
St = Symbol Table
Lt = Literal Table
Ic = Intermediate Code File
St = {N, A}
where,
N = Name of Symbol
A = Address of Symbol
Lt = {N, A}
where,
N = Name of Literal
A = Address of Literal
T = Variant II
D = {Ar, Fl, Sr}
where,
Ar = Array
Fl = File
Sr = Structure
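The table structures specified above map naturally onto simple classes. The following is a minimal Java sketch (Java is the language used for other assignments in this manual); the class and method names are illustrative assumptions, not a prescribed design:

import java.util.LinkedHashMap;
import java.util.Map;

class TableEntry {
    String name;    // N = name of the symbol or literal
    int address;    // A = address assigned from the location counter

    TableEntry(String name, int address) {
        this.name = name;
        this.address = address;
    }
}

class PassOneTables {
    // St and Lt from the specification above.
    Map<String, TableEntry> symbolTable = new LinkedHashMap<>();
    Map<String, TableEntry> literalTable = new LinkedHashMap<>();

    void defineSymbol(String name, int locationCounter) {
        symbolTable.put(name, new TableEntry(name, locationCounter));
    }

    void defineLiteral(String literal, int locationCounter) {
        literalTable.put(literal, new TableEntry(literal, locationCounter));
    }
}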
Testing Method:
Use the unit testing method for testing the individual functions. Test the overall functionality using functional testing.
FAQs:
Which variant is used in implementation? Why?
Which intermediate data structures are designed and implemented?
Which assembler is implemented?
Review Questions:
What is a two-pass assembler?
What is the significance of the symbol table?
Explain the assembler directives EQU and ORIGIN.
How are literals handled in Pass I?
What are the tasks done in Pass I?
How is error handling done in Pass I?
TITLE Pass II of a two pass assembler.
Learning Outcomes:
Theory:
Pass II assembles the instructions and writes the object program, providing the other information required for the linker and loader.
Pass II Tasks:
Assemble instructions (generate opcodes and look up addresses).
Generate data values defined by BYTE, WORD.
Perform processing of assembler directives (not done in Pass I).
Write the object program and the assembly listing.
I = {Ic, St, Lt}
where,
Ic = Intermediate Code File
St = Symbol Table
Lt = Literal Table
St = {N, A}
where,
N = Name of Symbol
A = Address of Symbol
Lt = {N, A}
where,
N = Name of Literal
A = Address of Literal
O = {o}
where,
o = Output File (Machine Code File)
T = Variant II
D = {Ar, Fl, Sr}
where,
Ar = Array
Fl = File
Sr = Structure
Steps to do /algorithm:
Read the intermediate code file generated in pass I.
Search symbol and literal tables to use in machine code generation.
Generate the machine code.
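A minimal Java sketch of this Pass II loop is given below; the opcodes and table contents are illustrative assumptions only:

import java.util.HashMap;
import java.util.Map;

public class PassTwoSketch {
    public static void main(String[] args) {
        // Tables as produced by Pass I (illustrative contents).
        Map<String, Integer> symbolTable = new HashMap<>();
        symbolTable.put("LOOP", 202);
        Map<String, Integer> literalTable = new HashMap<>();
        literalTable.put("='5'", 210);

        // Intermediate code: an opcode followed by a symbol or literal operand.
        String[][] intermediateCode = { {"04", "LOOP"}, {"05", "='5'"} };

        for (String[] ic : intermediateCode) {
            String opcode = ic[0];
            String operand = ic[1];
            // Look up the operand's address in the symbol or literal table.
            Integer address = symbolTable.containsKey(operand)
                    ? symbolTable.get(operand)
                    : literalTable.get(operand);
            System.out.println(opcode + " " + address); // one machine code line
        }
    }
}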
FAQs:
Which variant of the two-pass assembler is implemented?
What types of data structures are designed and used?
Oral/Review Questions:
What is a two-pass assembler?
What is the significance of the symbol table?
How are the literal table and pool table (POOLTAB) used in Pass II?
What are the tasks done in Pass II?
How is error handling done in Pass II?
How are the symbol and literal tables referred to in Pass II?
PROBLEM STATEMENT / DEFINITION: Design suitable data structures and implement Pass-I of a two-pass macro processor using OOP features in Java.
Theory:
The macro processing feature allows the programmer to write a shorthand version of a program (modular programming). The macro processor replaces each macro invocation with the corresponding sequence of statements, i.e., it performs macro expansion.
Tasks done by the macro processor:
Recognize macro definitions.
Save the macro definitions.
Recognize macro calls.
Expand macro calls.
Tasks in Pass I of a two-pass macro processor:
Recognize macro definitions and save them (build the MNT, MDT and ALA).
Steps to do /algorithm:
Read .asm file.
Create MNT and MDT.
Create ALA.
Create intermediate code file.
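The problem statement asks for Java with OOP features; the following is a minimal sketch of the Pass I data structures. The encoding of formal parameters as #0, #1, ... is one common convention, assumed here for illustration:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class MacroTables {
    // MNT: macro name -> index of its first line in the MDT.
    Map<String, Integer> mnt = new LinkedHashMap<>();
    // MDT: stored definition lines, formal parameters replaced by #n.
    List<String> mdt = new ArrayList<>();
    // ALA: formal parameter name -> positional index (#0, #1, ...).
    Map<String, Integer> ala = new LinkedHashMap<>();

    // Record one macro definition (body lines between MACRO and MEND).
    void define(String name, List<String> formals, List<String> body) {
        mnt.put(name, mdt.size());
        for (int i = 0; i < formals.size(); i++) {
            ala.put(formals.get(i), i);
        }
        for (String line : body) {
            // Substitute each formal parameter with its positional form.
            for (Map.Entry<String, Integer> e : ala.entrySet()) {
                line = line.replace(e.getKey(), "#" + e.getValue());
            }
            mdt.add(line);
        }
        mdt.add("MEND");
    }
}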
FAQs:
Which data structures are developed?
Which form of nested macro is handled?
Oral/Review Questions:
What is a macro and a macro processor?
What are the MDT and MNT?
What is a nested macro?
What are the tasks done in Pass I of the macro processor?
How are macro definitions handled in Pass I?
How are formal and actual parameters linked?
What are the steps to implement Pass I of the macro processor?
TITLE Pass II of a two pass macro processor.
Learning Outcomes:
The students will be able to
Expand the macro call statements.
Link the actual parameters with the formal parameters.
Demonstrate the use of various data structures in Pass II which are created in Pass I.
Theory:
The macro processing feature allows the programmer to write a shorthand version of a program (modular programming). The macro processor replaces each macro invocation with the corresponding sequence of statements, i.e., it performs macro expansion.
Tasks done by the macro processor:
Recognize macro definitions.
Save the macro definitions.
Recognize macro calls.
Expand macro calls.
Tasks in Pass II of a two-pass macro processor:
Recognize macro calls and expand them, substituting the actual parameters for the formal parameters.
Steps to do /algorithm:
Read the intermediate code file generated in Pass I.
Use the MNT to detect each macro call and locate its definition in the MDT.
Expand each call by copying the MDT lines, replacing formal parameters with the actual parameters via the ALA.
Write the expanded source code file.
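A minimal Java sketch of the expansion step, assuming the #n positional-parameter encoding produced in Pass I; the macro body and the call's actual parameters are illustrative:

import java.util.Arrays;
import java.util.List;

public class MacroExpandSketch {
    public static void main(String[] args) {
        // MDT body of one macro, with formals already encoded as #0, #1.
        List<String> mdtBody = Arrays.asList("MOVER AREG, #0", "ADD AREG, #1");
        // Actual parameters taken from a macro call, e.g. INCR X, Y.
        String[] actuals = {"X", "Y"};

        for (String line : mdtBody) {
            // Replace each positional parameter with the actual argument.
            for (int i = 0; i < actuals.length; i++) {
                line = line.replace("#" + i, actuals[i]);
            }
            System.out.println(line); // expanded statement
        }
    }
}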
INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Objective: Intention behind study
• Software & Hardware requirements
• Explanation of the assignment
• Algorithm or Flowchart
• Developing and testing the program
• Program listing & test results
• Conclusion
FAQs:
How are nested macros handled?
How is a macro call within a macro definition handled?
Which types of parameters are handled?
Oral/Review Questions:
What is a macro and a macro processor?
What is a macro call nested within a macro definition?
What are the tasks done in Pass II of the macro processor?
How are macro call statements expanded in Pass II?
How are formal and actual parameters linked?
What are the steps to implement Pass II of the macro processor?
Group B-ASSIGNMENT NUMBER: 02
Revised On: 10/02/2018
TITLE Lexical Analysis to generate tokens
Learning Outcomes:
The students will be able to perform lexical analysis on an input source and generate tokens using LEX.
Theory:
During the first phase the compiler reads the input and converts strings in the source to tokens. With regular
expressions we can specify patterns to lex so it can generate code that will allow it to scan and match strings
in the input. Each pattern specified in the input to lex has an associated action. Typically an action returns a
token that represents the matched string for subsequent use by the parser. Initially we will simply print the
matched string rather than return a token value.
The following represents a simple pattern, composed of a regular expression, that scans for identifiers. Lex will read this pattern and produce C code for a lexical analyzer that scans for identifiers.
letter(letter|digit)*
This pattern matches a string of characters that begins with a single letter followed by zero or more letters or digits. This example nicely illustrates operations allowed in regular expressions:
• repetition, expressed by the “*” operator
• alternation, expressed by the “|” operator
• concatenation
Any regular expression may be expressed as a finite state automaton (FSA). We can represent an
FSA using states, and transitions between states. There is one start state and one or more final or accepting
states.
In Figure, state 0 is the start state and state 2 is the accepting state. As characters are read we make a
transition from one state to another. When the first letter is read we transition to state 1. We remain in state 1
as more letters or digits are read. When we read a character other than a letter or digit we transition to
accepting state 2. Any FSA may be expressed as a computer program. For example, our 3-state machine is
easily programmed:
start:  goto state0
state0: read c
        if c = letter goto state1
        goto state0
state1: read c
        if c = letter goto state1
        if c = digit goto state1
        goto state2
state2: accept string
This is the technique used by lex. Regular expressions are translated by lex to a computer program that
mimics an FSA. Using the next input character and current state the next state is easily determined by
indexing into a computer-generated state table.
Now we can easily understand some of lex’s limitations. For example, lex cannot be used to
recognize nested structures such as parentheses. Nested structures are handled by incorporating a stack.
Whenever we encounter a “(” we push it on the stack. When a “)” is encountered we match it with the top of
the stack and pop the stack. However lex only has states and transitions between states. Since it has no stack
it is not well suited for parsing nested structures. Yacc augments an FSA with a stack and can process
constructs such as parentheses with ease. The important thing is to use the right tool for the job. Lex is good
at pattern matching. Yacc is appropriate for more challenging tasks.
Regular expressions are used for pattern matching. A character class defines a single character, and within it the normal operators lose their meaning. Two operators allowed in a character class are the hyphen (“-”) and circumflex (“^”). When used between two characters, the hyphen represents a range of characters; the circumflex, when used as the first character, negates the expression. For example, [0-9] matches any single digit, while [^0-9] matches any character that is not a digit. If two patterns match the same string, the longest match wins. In case both matches are the same length, the first pattern listed is used.
Here is the shortest possible lex program:
%%
Input is copied to output one character at a time. The first %% is always required, as there must
always be a rules section. However if we don’t specify any rules then the default action is to match
everything and copy it to output. Defaults for input and output are stdin and stdout, respectively.
Here is the same example with defaults explicitly coded:
%%
   /* match everything except newline */
. ECHO;
   /* match newline */
\n ECHO;
%%
int yywrap(void) {
    return 1;
}
int main(void) {
    yylex();
    return 0;
}
Two patterns have been specified in the rules section. Each pattern must begin in column one. This
is followed by whitespace (space, tab or newline) and an optional action associated with the pattern.
The action may be a single C statement, or multiple C statements, enclosed in braces. Anything not
starting in column one is copied verbatim to the generated C file. We may take advantage of this
behavior to specify comments in our lex file. In this example there are two patterns, “.” and “\n”,
with an ECHO action associated with each pattern. Several macros and variables are predefined by lex. ECHO is a macro that writes the text matched by the pattern; this is also the default action for any unmatched strings. Typically, ECHO is defined as:
#define ECHO fwrite(yytext, yyleng, 1, yyout)
Variable yytext is a pointer to the matched string (NULL-terminated) and yyleng is the length of
the matched string. Variable yyout is the output file and defaults to stdout. Function yywrap is
called by lex when input is exhausted. Return 1 if you are done or 0 if more processing is required.
Every C program requires a main function. In this case we simply call yylex that is the main entry-
point for lex. Some implementations of lex include copies of main and yywrap in a library thus
eliminating the need to code them explicitly. This is why our first example, the shortest lex
program, functioned properly.
Here is a program that does nothing at all. All input is matched but no action is associated with any
pattern so there will be no output.
%%
.
\n
The following example prepends line numbers to each line in a file. Some implementations of lex
predefine and calculate yylineno. The input file for lex is yyin and defaults to stdin.
%{
int yylineno;
%}
%%
^(.*)\n printf("%4d\t%s", ++yylineno, yytext);
%%
int main(int argc, char *argv[]) {
    yyin = fopen(argv[1], "r");
    yylex();
    fclose(yyin);
    return 0;
}
The definitions section is composed of substitutions, code, and start states. Code in the definitions
section is simply copied as-is to the top of the generated C file and must be bracketed with “%{“
and “%}” markers. Substitutions simplify pattern-matching rules. For example, we may define
digits and letters:
digit [0-9]
letter [A-Za-z]
%{
int count;
%}
%%
   /* match identifier */
{letter}({letter}|{digit})* count++;
%%
int main(void) {
    yylex();
    printf("number of identifiers = %d\n", count);
    return 0;
}
Whitespace must separate the defining term and the associated expression. References to
substitutions in the rules section are surrounded by braces ({letter}) to distinguish them from
literals. When we have a match in the rules section the associated C code is executed. Here is a
scanner that counts the number of characters, words, and lines in a file (similar to Unix wc):
%{
int nchar, nword, nline;
%}
%%
\n { nline++; nchar++; }
[^ \t\n]+ { nword++; nchar += yyleng; }
. { nchar++; }
%%
int main(void) {
    yylex();
    printf("%d\t%d\t%d\n", nchar, nword, nline);
    return 0;
}
FAQs:
What is LEX?
Oral/Review Questions:
What are lexemes?
What is pattern recognition?
What is lexical analysis?
What is LEX?
Group B-ASSIGNMENT NUMBER: 05
Revised On: 10/02/2018
TITLE YACC program to run syntactic analysis
Pre-requisite:
Learning Objectives:
Learning Outcomes:
The students will be able to
Create a grammar and match the incoming instructions to their respective syntaxes.
Theory:
Lex recognizes regular expressions, whereas YACC recognizes an entire grammar. Lex divides the input stream into tokens, while YACC takes these tokens and groups them together logically. A YACC specification file has the following layout:
%{
C declarations
%}
declaration section
%%
rules section
%%
programs section (user defined functions)
Declaration section:
Here, the definition section is the same as that of Lex, where we can define all tokens and include
header files. The declarations section is used to define the symbols used to define the target
language and their relationship with each other. In particular, much of the additional information
required to resolve ambiguities in the context-free grammar for the target language is provided here.
The rules section defines the context-free grammar to be accepted by the function Yacc generates,
and associates with those rules C-language actions and additional precedence information. The
grammar is described below, and a formal definition follows.
The rules section comprises one or more grammar rules. A grammar rule has the form:
A : BODY ;
The symbol A represents a non-terminal name, and BODY represents a sequence of zero or
more names, literals, and semantic actions that can then be followed by optional precedence rules.
Only the names and literals participate in the formation of the grammar; the semantic actions and
precedence rules are used in other ways. The colon and the semicolon are Yacc punctuation.
If there are several successive grammar rules with the same left-hand side, the vertical bar ’|’ can be
used to avoid rewriting the left-hand side; in this case the semicolon appears only after the last rule.
The BODY part can be empty.
Programs Section
The programs section can include the definition of the lexical analyzer yylex(), and any other
functions; for example, those used in the actions specified in the grammar rules. It is unspecified
whether the programs section precedes or follows the semantic actions in the output file; therefore,
if the application contains any macro definitions and declarations intended to apply to the code in
the semantic actions, it shall place them within "%{ ... %}" in the declarations section.
The yylex() function is an integer-valued function that returns a token number representing the
kind of token read. If there is a value associated with the token returned by yylex() (see the
discussion of tag above), it shall be assigned to the external variable yylval.
Input descriptions:
The format of the grammar rules for Yacc is:
name : names and 'single character's
| alternatives
;
Yacc definitions:
%start line: means the whole input should match line.
%union: lists all possible types for values associated with parts of the grammar and gives each a field-name.
%type: gives an individual type for the values associated with each part of the grammar, using the field-names from the %union declaration.
%token: declares each token (terminal symbol) used by YACC that is recognised by LEX, and gives the type of its value.
Yacc does its work using parsing; hence the program it generates is called a parser.
PARSING:
Parsing is the activity of checking whether a string of symbols is in the language of some grammar,
where this string is usually the stream of tokens produced by the lexical analyzer. If the string is in
the grammar, we want a parse tree, and if it is not, we hope for some kind of error message
explaining why not. There are two main kinds of parsers in use, named for the way they build the
parse trees:
Top-down: A top-down parser attempts to construct a tree from the root, applying productions
forward to expand non-terminals into strings of symbols.
Bottom-up: A Bottom-up parser builds the tree starting with the leaves, using productions in reverse
to identify strings of symbols that can be grouped together.
In both cases, the construction of the derivation is directed by scanning the input sequence from left to right, one symbol at a time.
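To make the top-down idea concrete, here is a minimal recursive-descent parser sketch in Java for the toy grammar E -> T ('+' T)*, T -> digit. This only illustrates top-down parsing; YACC itself generates a bottom-up (LALR) parser in C:

public class TopDownSketch {
    private final String input;
    private int pos = 0;

    TopDownSketch(String input) { this.input = input; }

    private char peek() { return pos < input.length() ? input.charAt(pos) : '$'; }

    // E -> T ('+' T)* : expand the start symbol from the root.
    boolean parseExpr() {
        if (!parseTerm()) return false;
        while (peek() == '+') {
            pos++;                     // consume '+'
            if (!parseTerm()) return false;
        }
        return true;
    }

    // T -> digit
    boolean parseTerm() {
        if (Character.isDigit(peek())) { pos++; return true; }
        return false;
    }

    public static void main(String[] args) {
        TopDownSketch p = new TopDownSketch("1+2+3");
        boolean ok = p.parseExpr() && p.pos == p.input.length();
        System.out.println(ok ? "accepted" : "rejected");
    }
}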
FAQs:
Which parser is used in YACC?
How do you debug a YACC specification file?
What is an LR parser?
What are the types of parsers?
Oral/Review Questions:
Why is YACC called a compiler-compiler?
How do you debug a YACC specification file?
What is an LR parser?
What are the types of parsers?
Which parser is used in YACC?
What is an LALR parser?
Compare top-down and bottom-up parsers.
What are LL and LR parsers?
What is a state machine?
How does YACC work?
What is the full form of YACC?
ASSIGNMENT NUMBER: C-01
Revised On: 17/12/2017
TITLE Scheduling algorithms.
Prerequisites:
Basic Operating System Functionalities.
Concept of Multitasking and Multiuser OS.
Learning Objectives:
Learning Outcomes:
Theory:
Process scheduling: the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process, on the basis of a particular strategy. Such an OS allows more than one process to be loaded into executable memory at a time.
Scheduler:
Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.
Types of Scheduler:
1) Long-term scheduler
2) Short-term scheduler
3) Medium-term scheduler
Arrival: A request arrives in the system when the user submits it to the OS. The time at which a request arrives is called its arrival time.
Scheduling: A pending request is scheduled for service when the scheduler selects it for servicing.
Preemption: A process is preempted when the CPU switches to another process before completing it; the preempted process is added back to the pending requests.
Non-preemption: A scheduled process is always completed before the next process is scheduled.
Completion: The process is completed, and the next process is selected for processing by the CPU.
The CPU scheduler takes into account the arrival time, the size of the request in CPU seconds, the CPU time already consumed by the request, and the deadline of the process, depending on the scheduling policy.
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allowed to utilize the CPU. The criteria for selection of an algorithm are:
Maximum throughput.
Least turnaround time.
Minimum waiting time.
Maximum CPU utilization.
Scheduling Policies:
FCFS (First Come First Served) Scheduling:
Process requests are scheduled in the order of their arrival time. The pending requests are kept in a queue: the first request in the queue is scheduled first, and a newly arriving request is added to the end of the queue.
Example Gantt chart:
| P0 | P1 | P2 | P3 |
0    5    8    16   22
Algorithm:
1) Input the processes along with their burst times.
2) Input the arrival time for all processes.
3) Sort the processes according to their arrival time, along with their indices.
4) Process the requests in the sorted order.
5) Stop.
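A minimal Java sketch of this FCFS computation, with illustrative arrival and burst times (processes assumed already sorted by arrival):

public class FcfsSketch {
    public static void main(String[] args) {
        int[] arrival = {0, 1, 2, 3};   // sorted arrival times
        int[] burst   = {5, 3, 8, 6};   // CPU burst of each process

        int time = 0;
        for (int i = 0; i < arrival.length; i++) {
            if (time < arrival[i]) time = arrival[i]; // CPU idles until arrival
            int waiting = time - arrival[i];          // time spent in the queue
            time += burst[i];                         // run to completion
            System.out.println("P" + i + " waiting=" + waiting
                    + " completion=" + time);
        }
    }
}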
SJF (Shortest Job First) Scheduling:
This is the best approach to minimize waiting time. It is easy to implement in batch systems where the required CPU time is known in advance. It may be:
a) Non-preemptive
b) Preemptive (Shortest Remaining Time First)
Example Gantt chart:
| P5 | P2 | P6 | P2 | P4 | P1 | P3 |
0   100  150  200  275  425  725  1125
Algorithm:
1) Input the processes along with their arrival and burst times.
2) At each scheduling decision, select the arrived process with the shortest burst time.
3) Run it to completion and repeat until all processes are done.
4) Stop.
Round Robin Scheduling: Schedules processes using time slicing. The amount of CPU time a process may use when it is allocated the CPU is limited (the time quantum). The process is preempted if it requires more time than the quantum, and it releases the CPU early if it requires an I/O operation before the time slice expires. Round robin keeps weighted turnaround times approximately equal, but throughput may not be as good, since all processes are treated equally.
Time quantum = 50
Average waiting time = 225
Example Gantt chart:
| P1 | P2 | P3 | P4 | P1 | P2 | P3 | P1 | P3 |
0    50  100  150  200  250  300  350  400  550
Algorithm:
1) Input the processes with their arrival and burst times, and take the time quantum.
2) Sort the processes according to arrival time.
3) Run each ready process for at most one time quantum, moving unfinished processes to the back of the ready queue, until all processes are done.
4) End.
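A minimal Java sketch of round-robin time slicing, assuming for simplicity that all processes arrive at time 0 (burst times are illustrative):

import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinSketch {
    public static void main(String[] args) {
        int quantum = 50;
        int[] remaining = {150, 100, 250}; // remaining burst per process
        Queue<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < remaining.length; i++) ready.add(i);

        int time = 0;
        while (!ready.isEmpty()) {
            int p = ready.poll();
            int slice = Math.min(quantum, remaining[p]);
            time += slice;                 // run for one time slice
            remaining[p] -= slice;
            if (remaining[p] > 0) {
                ready.add(p);              // not finished: back of the queue
            } else {
                System.out.println("P" + (p + 1) + " completes at " + time);
            }
        }
    }
}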
Priority Scheduling: This is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems. Each process is assigned a priority; the process with the highest priority is executed first, and so on. Processes with the same priority are executed on an FCFS basis.
Example Gantt chart:
| P3 | P1 | P0 | P2 |
0    6    9    14   22
Algorithm:
1) Input the processes, including arrival time, burst time and priority.
2) Sort the processes according to arrival time.
3) If processes have the same arrival time, sort them by priority.
4) Execute the processes in the resulting order.
5) End.
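A minimal Java sketch of non-preemptive priority scheduling with illustrative data (all processes assumed to arrive at time 0); sorting by burst time instead of priority would give non-preemptive SJF:

import java.util.Arrays;
import java.util.Comparator;

public class PrioritySketch {
    public static void main(String[] args) {
        // Each row is {process id, burst time, priority};
        // a smaller value means a higher priority.
        int[][] procs = { {0, 5, 2}, {1, 3, 1}, {2, 8, 3}, {3, 6, 0} };
        Arrays.sort(procs, Comparator.comparingInt((int[] p) -> p[2]));

        int time = 0;
        for (int[] p : procs) {
            System.out.println("P" + p[0] + " waits " + time);
            time += p[1]; // run to completion before the next process
        }
    }
}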
PROBLEM STATEMENT / DEFINITION: Write a Java program to implement Banker's Algorithm.
OBJECTIVE:
To study the algorithm for finding out whether a system is in a safe state.
To study the resource-request algorithm for deadlock avoidance.
INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Objectives
• Theory
• Class Diagram/UML diagram
• Test cases
• Program Listing
• Output
• Conclusion
Aim: Write a Java program to implement Banker's Algorithm.
Pre-requisite:
Basic conditions for deadlock and deadlock handling approaches.
Learning Objectives:
To study the algorithm for finding out whether a system is in a safe state.
To study the resource-request algorithm for deadlock avoidance.
To study and implement the Banker’s algorithm to avoid deadlock.
Learning Outcomes:
Theory:
Deadlock: A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set. Examples of such events are resource acquisition and release.
As shown in the diagram process P1 is holding the resource R2 and requesting resource R1.
Process P2 is holding the resource R1 and requesting resource R2. So no process can proceed
further, indicating the deadlock.
Steps To Do/algorithm:
Input the Claim matrix (C), the Allocation matrix (A) and the Resource vector (R).
Calculate the Need matrix (C - A) and the Available vector (V).
Test for the safety condition of the system.
Decide whether the resources can be allocated or not.
Let Request[i] be the request vector for process P[i]. If Request[i,j] = k, then process P[i] wants k instances of resource type R[j]. When a request for resources is made by process P[i], the following actions are taken:
1. If Request[i] <= Need[i], go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Request[i] <= Available, go to step 3. Otherwise, P[i] must wait, since the resources are not available.
3. Have the system pretend to have allocated the requested resources to process P[i] by modifying the state as follows:
Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
If the resulting resource-allocation state is safe, the transaction is completed and process P[i] is allocated its resources. However, if the new state is unsafe, then P[i] must wait for Request[i] and the old resource-allocation state is restored.
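A minimal Java sketch of the safety check at the heart of Banker's algorithm; the matrices are illustrative, and the full assignment would wrap this with the resource-request steps above:

public class BankerSketch {
    // Returns true if some ordering lets all processes run to completion.
    static boolean isSafe(int[][] need, int[][] alloc, int[] avail) {
        int n = need.length, m = avail.length;
        int[] work = avail.clone();
        boolean[] finish = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (finish[i]) continue;
                boolean canRun = true;
                for (int j = 0; j < m; j++)
                    if (need[i][j] > work[j]) { canRun = false; break; }
                if (canRun) {
                    // Pretend P[i] finishes and releases its allocation.
                    for (int j = 0; j < m; j++) work[j] += alloc[i][j];
                    finish[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finish) if (!f) return false;
        return true;
    }

    public static void main(String[] args) {
        int[][] need  = { {1, 2}, {0, 1} }; // illustrative Need = C - A
        int[][] alloc = { {1, 0}, {1, 1} }; // illustrative Allocation
        int[] avail   = {1, 1};             // illustrative Available vector
        System.out.println(isSafe(need, alloc, avail) ? "safe" : "unsafe");
    }
}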
ASSIGNMENT NUMBER: C3
Revised On:
TITLE Study of UNIX system calls for process management.
PROBLEM STATEMENT / DEFINITION: Implement UNIX system calls like ps, fork, join, exec family, and wait for process management (use shell script/ Java/ C programming).
Learning Outcomes:
Theory:
fork - create a child process
#include <sys/types.h>
#include <unistd.h>
pid_t fork(void);
fork() creates a new process by duplicating the calling process. The new process is referred to as
the child process. The calling process is referred to as the parent process.
The child process is an exact duplicate of the parent process except for the following points:
* The child has its own unique process ID, and this PID does not match the ID of any existing
process group or session.
* The child's parent process ID is the same as the parent's process ID.
RETURN VALUE
On success, the PID of the child process is returned in the parent, and 0 is returned in the child. On failure, -1 is returned in the parent, and no child process is created.
An exec call will load a new program into the process and replace the currently running program with the one specified. For example, consider this minimal program, which will execute the ls -l command in the current directory (it uses the execvp variant described below):
#include <unistd.h>
int main(void) {
    char *argv[] = {"ls", "-l", NULL};
    execvp("ls", argv);   /* replaces this program with ls -l */
    return 1;             /* reached only if execvp fails */
}
There are three main versions of exec which we will focus on:
execv(char * path, char * argv[]) : given the path to the program and an argument array,
load and execute the program
execvp(char * file, char * argv[]) : given a file(name) of the program and an argument array, find the file in the environment PATH and execute the program
execvpe(char * file, char * argv[], char * envp[]) : given a file(name), an argument array, and the environment settings, search the PATH within that environment for the program named file and execute it with the arguments
The wait() system call is used by a parent process to wait for the status of the child to change.
A status change can occur for a number of reasons, the program stopped or continued, but we'll
only concern ourselves with the most common status change: the program terminated or
exited. (We will discuss stopped and continued in later lessons.)
System calls provide the interface between a process and the operating system; they are the routine services of the operating system. Linux provides process-management system calls such as fork() (create a process), the exec() family, and wait().
Steps To Do/algorithm:
1. Study the various Linux process handling system calls.
2. Execute basic Linux commands.
3. Print information about a process: its task structure, IDs, etc.
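Since the problem statement permits Java, the following minimal sketch creates a child process and waits for it, mirroring fork/exec/wait at the JVM level (it assumes a Unix-like system where ps is available):

import java.io.IOException;

public class ProcessDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("ps", "-e");
        pb.inheritIO();                 // child shares our stdin/stdout
        Process child = pb.start();     // roughly fork + exec
        int status = child.waitFor();   // wait for the status change
        System.out.println("child exited with status " + status);
    }
}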
FAQs:
What is the use of the exec and wait system calls?
Explain multi-threading and the thread-related system calls in Linux.
Which command is used to get process IDs?
How do you kill a running process from the command prompt?
Which system call is used to create new processes?
INSTRUCTIONS FOR WRITING JOURNAL:
• Title
• Problem Definition
• Objectives
• Theory
• Class Diagram/UML diagram
• Test cases
• Program Listing
• Output
• Conclusion
Pre-requisite:
Basic functionalities of OS.
Memory management in OS.
Learning Objectives:
Implement various page replacement algorithms like FIFO, LRU and Optimal
Compare the page replacement algorithms based on hit ratio
Theory:
Whenever there is a page reference for which the needed page is not present in memory, that event is called a page fault (also called page fetch or page failure). In such a case we have to make space in memory for the new page by replacing an existing page. But we cannot replace just any page: we should replace a page that is not currently in use. There are several algorithms for choosing the page to replace, and we can select an appropriate page replacement policy. Designing appropriate algorithms to solve this problem is an important task because disk I/O is expensive.
FIFO (First In First Out): The oldest page in physical memory is the one selected for replacement. Keep a list of the pages in memory in order of arrival; on a page fault, the page at the head of the list is removed and the new page is added to the tail.
LRU (Least Recently Used): In this algorithm, the page that has not been used for the longest period of
time is selected for replacement. Although LRU is theoretically realizable, it is not cheap. To fully
implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most
recently used page at the front and the least recently used page at the rear. The difficulty is that the
list must be updated on every memory reference. Finding a page in the list, deleting it, and then
moving it to the front is a very time consuming operation, even in hardware (assuming that such
hardware could be built).
The Optimal Page Replacement Algorithm:
This algorithm has the lowest page-fault rate of all algorithms. It states: replace the page that will not be used for the longest period of time, i.e., future knowledge of the reference string is required. It is often called Belady's MIN algorithm. Basic idea: replace the page that will not be referenced for the longest time.
Steps to do /algorithm:
1. Create a menu to select among the various page replacement algorithms.
2. Take the number of page frames and the page reference string as input.
3. Calculate the number of page faults.
4. Perform a comparative assessment of the best policy for the given reference string.
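A minimal Java sketch of the FIFO policy from these steps, counting page faults for an illustrative reference string:

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public class FifoPagingSketch {
    public static void main(String[] args) {
        int frames = 3;
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4}; // illustrative reference string

        Queue<Integer> fifo = new ArrayDeque<>(); // arrival order of pages
        Set<Integer> inMemory = new HashSet<>();
        int faults = 0;

        for (int page : refs) {
            if (inMemory.contains(page)) continue; // hit
            faults++;                              // page fault
            if (fifo.size() == frames) {
                inMemory.remove(fifo.poll());      // evict the oldest page
            }
            fifo.add(page);
            inMemory.add(page);
        }
        System.out.println("page faults = " + faults);
    }
}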
FAQs:
Which page replacement algorithm gives the minimum hit ratio?
Which data structures are used to implement the FIFO policy?
Which data structures are used to implement the optimal page replacement policy?
Oral/Review Questions:
What is page replacement? When is it needed?
Name the page replacement algorithms.
What is the optimal page replacement policy?
Which policy assumes locality of reference?
Which policy needs future knowledge of the usage pattern?