SPOS Lab Manual 2023-24
Laboratory Practice – I
310248: System Programming & Operating System Laboratory
LABORATORY MANUAL
AY: 2023-24
LABORATORY PRACTICE – I
TE Computer Engineering
Semester –I
Subject Code - 310248
Prepared By:
CERTIFICATE
This is to certify that
Mr./Ms.
of Class Roll No. has completed all the practical work/term work in subject
Date:
Index
Sr. No.  Date  Experiment Performed  Page No.  Sign  Remark
Group A (Any Two Assignments from Sr. No. 1 to 3)
1. Design suitable Data structures and implement Pass-I and Pass-II
of a two-pass assembler for pseudo-machine. Implementation
should consist of a few instructions from each category and few
assembler directives. The output of Pass-I (intermediate code file
and symbol table) should be input for Pass-II
2. Design suitable data structures and implement Pass-I and Pass-II
of a two-pass macro-processor. The output of Pass-I (MNT, MDT and
intermediate code file without any macro definitions) should be
input for Pass-II
3. Write a program to recognize infix expressions using LEX and
YACC.
Expt. No.  Title of Experiment
Group A (Any Two Assignments from Sr. No. 1 to 3)
1. Design suitable data structures and implement Pass-I and Pass-II of a two-pass assembler for pseudo-machine. Implementation should consist of a few instructions from each category and few assembler directives. The output of Pass-I (intermediate code file and symbol table) should be input for Pass-II.
2. Design suitable data structures and implement Pass-I and Pass-II of a two-pass macro-processor. The output of Pass-I (MNT, MDT and intermediate code file without any macro definitions) should be input for Pass-II.
3. Write a program to recognize infix expressions using LEX and YACC.
Group B
4. Write a program to solve Classical Problems of Synchronization using Mutex and Semaphore.
5. Write a program to simulate CPU Scheduling Algorithms: FCFS, SJF (Preemptive), Priority (Non-Preemptive) and Round Robin (Preemptive).
6. Write a program to simulate Memory placement strategies – best fit, first fit, next fit and worst fit.
7. Write a program to simulate Page replacement algorithms.
Reference Books:
1. Dhamdhere D., "Systems Programming and Operating Systems", McGraw Hill, ISBN 0-07-463579
2. Silberschatz, Galvin, Gagne, "Operating System Principles", 9th Edition, Wiley, ISBN 978-1-118-06333-0
3. Bob Hughes, Mike Cotterell and Rajib Mall, "Software Project Management", Sixth Edition, Tata McGraw Hill, New Delhi, 2017
4. Robert K. Wysocki, "Effective Software Project Management", Wiley Publication, 2011
5. Maarten van Steen, Andrew S. Tanenbaum, "Distributed Systems", Third Edition, Version 3
6. George Coulouris, Jean Dollimore, Tim Kindberg, "Distributed Systems: Concepts and Design", Fifth Edition
ASSIGNMENT NO: 1 – Group A
Title of Assignment:
Design suitable Data structures and implement Pass-I and Pass-II of a two-pass assembler for pseudo-machine.
Implementation should consist of a few instructions from each category and few assembler directives. The output
of Pass-I (intermediate code file and symbol table) should be input for Pass-II.
Problem Statement:
Implement Pass-I of a two-pass assembler with a hypothetical instruction set using the Java language.
The instruction set should include all types of assembly language statements, i.e. Imperative, Declarative and
Assembler Directive. While designing, stress should be given on:
a) How efficiently the mnemonic opcode table could be implemented so as to enable faster retrieval of an op-code.
b) Implementation of the symbol table for faster retrieval.
Objectives: To learn how the Pass-I and Pass-II data structures work on assembly program input.
Theory: Assembler:
1. An assembler is a program that accepts as input an assembly language program and produces as
output its machine language equivalent i.e. It produces bit configurations of each mnemonic in the
assembly language as shown. This machine language information from the assembler is given to the
loader for further processing.
2. Note that in standard cases the assembler not only produces this bit configuration but also produces
information useful for the loader: externally defined symbols, for example, are noted and passed on to
the loader for further resolution of their addresses.
3. The main reason for the existence of assembler was to shift the burdens of calculating specific
addresses from programmer to computer.
Tasks performed by the two-pass assembler
Pass 1 :
1. Separate symbol, Mnemonic and Operand fields
2. Build the symbol table
3. Perform LC (location counter) processing
4. Construct intermediate code
Pass 2:
1. Synthesis of target program
2. Evaluate fields and generate code
3. Process pseudo opcodes
Pass 1 of assembler:
1. Pass I uses the following data structures:
a. Symbol table (ST)
b. Literal table (LT)
c. Machine opcode table (MOT)
d. Location counter (LC)
e. Copy of the source program
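The tables listed above can be sketched in Java as follows. This is a minimal illustration, not a prescribed solution: the class name and table contents are assumptions, with the intermediate-code pairs taken from the listing later in this assignment. A HashMap is used for the MOT so that opcode lookup is fast, and a LinkedHashMap keeps symbols in definition order.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class PassOneTables {
    // Machine opcode table: mnemonic -> "class,opcode" as in the IC listing.
    static final Map<String, String> MOT = new HashMap<>();
    static {
        MOT.put("START", "AD,00");
        MOT.put("END",   "AD,01");
        MOT.put("LTORG", "AD,04");
        MOT.put("STOP",  "IS,00");
        MOT.put("ADD",   "IS,01");
        MOT.put("SUB",   "IS,02");
        MOT.put("MOVER", "IS,04");
        MOT.put("MOVEM", "IS,05");
        MOT.put("READ",  "IS,09");
        MOT.put("PRINT", "IS,10");
        MOT.put("DS",    "DL,02");
    }

    // Symbol table: symbol -> address assigned from the location counter.
    static final Map<String, Integer> SYMTAB = new LinkedHashMap<>();

    // Enter a label at the current value of the location counter.
    static void define(String symbol, int locCntr) {
        SYMTAB.put(symbol, locCntr);
    }

    public static void main(String[] args) {
        define("A", 216);
        define("B", 217);
        System.out.println(MOT.get("MOVER") + " " + SYMTAB.get("A"));
    }
}
```

The HashMap gives average O(1) retrieval of an opcode, which is exactly the design concern stated in the problem statement.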
INTERMEDIATE CODE
(AD, 00) (C, 200)
(IS, 09) (S, 00)
(IS, 09) (S, 01)
(IS, 04) (0) (L, 00)
(IS, 04) (0) (S, 00)
(IS, 01) (0) (S, 01)
(IS, 02) (0) (L, 01)
(IS, 05) (0) (S, 02)
(IS, 10) (S, 02)
(AD, 04)
(IS, 04) (0) (L, 02)
(IS, 04) (0) (S, 00)
(IS, 01) (0) (S, 01)
(IS, 02) (0) (L, 03)
(IS, 08) (0) (L, 04)
(IS, 05) (0) (S, 02)
(DL, 01) (C, 01)
(DL, 01) (C, 01)
(DL, 01) (C, 01)
(IS, 00)
(AD, 01)
Conclusion: Thus we are able to design suitable data structures and implement Pass-I of a two-pass
assembler for a pseudo-machine, which is mapped with CO1.
Questions:
1. What feature of assembly language makes it mandatory to design a two-pass assembler?
2. How are literals handled in an assembler?
3. How are assembler directives handled in Pass I of the assembler?
ASSIGNMENT NO: 2 – Group A
Title of Assignment:
Design suitable data structures and implement Pass-I and Pass-II of a two-pass macro-processor. The output of
Pass-I (MNT, MDT and intermediate code file without any macro definitions) should be input for Pass-II.
Problem Statement:
Implement Pass-II of a two-pass assembler with a hypothetical instruction set using the Java language.
The instruction set should include all types of assembly language statements, i.e. Imperative, Declarative and
Assembler Directive. While designing, stress should be given on:
a) How efficiently the mnemonic opcode table could be implemented so as to enable faster retrieval of an op-code.
b) Implementation of the symbol table and pool tables for faster retrieval.
Objectives: To learn how the Pass-I and Pass-II data structures work on assembly program input.
Theory: Assembler:
ALGORITHM:
1. Open and read the first line from the intermediate file.
2. If the first line contains the opcode "START", then write the label, opcode and operand field values of
the corresponding statement directly to the final output file.
3. Do the following steps until an "END" statement is reached.
3.1 Write the location counter, opcode and operand fields of the corresponding statement to
the output file, along with the object code.
3.2 If there is no symbol/label in the operand field, then the operand address is assigned as zero and
it is assembled with the object code of the instruction.
3.3 If the opcode is BYTE, WORD, RESB, etc., convert the constants to the object code.
4. Close the files and exit.
During the second pass, translation from source language to machine language takes place. Instruction
addresses and label addresses are taken from the symbol table instead of their names. Because the assembler
does not know where the program will be loaded in memory, it generates logical addresses instead of
absolute addresses. The loader uses a relocation constant to solve the problem of relocation, while the
external referencing problem is resolved by the linker at link time: the linker connects the object program
to the code for standard library functions.
● It builds a table of incomplete instructions (TII) to record information about instructions whose
operand fields are left blank.
● Each entry in this table contains a pair of the form (instruction address, symbol) to indicate that the
address of the symbol should be put in the operand field of the instruction with the address
instruction address.
● By the time the END statement is processed, the symbol table would contain the addresses of all
symbols defined in the source program and TII would contain information describing all forward
references.
● The assembler can now process each entry in TII to complete the concerned instruction.
● Alternatively, entries in TII can be processed on the fly during normal processing. In this approach,
all forward references to a symbol would be processed when the statement that defines the symbol
is encountered.
● The instruction corresponding to the statement
MOVER BREG, ONE contains a forward reference to ONE.
● Hence the assembler leaves the second operand field blank in the instruction that assembled to reside
in location 101 of memory and makes an entry (101, ONE) in the table of incomplete instructions
(TII).
● While processing the statement ONE DC '1', the address of ONE, which is 115, is entered in the
symbol table.
● After the END statement is processed, the entry (101, ONE) would be processed by obtaining the
address of ONE from the symbol table and inserting it in the second operand field of the instruction
with assembled address 101.
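The back-patching described in these bullets can be illustrated with a small Java sketch. All names here (`TIIDemo`, `backpatch`, the array layout) are illustrative assumptions; the example reproduces the (101, ONE) entry from the text.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TIIDemo {
    // One TII entry: (instruction address, symbol) for a blank operand field.
    static class Entry {
        final int instrAddress;
        final String symbol;
        Entry(int a, String s) { instrAddress = a; symbol = s; }
    }

    // After END is processed: fill each blank operand field from the symbol table.
    static void backpatch(int[] operands, int baseAddress,
                          List<Entry> tii, Map<String, Integer> symtab) {
        for (Entry e : tii)
            operands[e.instrAddress - baseAddress] = symtab.get(e.symbol);
    }

    public static void main(String[] args) {
        int[] operands = new int[20];          // operand fields; 0 = left blank
        List<Entry> tii = new ArrayList<>();
        tii.add(new Entry(101, "ONE"));        // MOVER BREG, ONE assembled at 101
        Map<String, Integer> symtab = new HashMap<>();
        symtab.put("ONE", 115);                // ONE DC '1' defined at address 115
        backpatch(operands, 100, tii, symtab);
        System.out.println(operands[1]);       // operand of the instruction at 101
    }
}
```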
Pass II Algorithm
It has been assumed that the target code is to be assembled in the named code_area.
1. code_area_address := address of code_area;
pooltab_ptr := 1;
loc_cntr := 0;
2. While next statement is not an END statement
(a) Clear machine_code_buffer;
(b) If an LTORG statement
(i) Process literals in LITTAB[POOLTAB[pooltab_ptr]]…
LITTAB[POOLTAB[pooltab_ptr+1]]-1 similar to processing of constants in a DC statement
i.e. assemble the literals in machine_code_buffer.
(ii) size := size of memory area required for literals;
(iii) pooltab_ptr:= pooltab_ptr +1;
(c) If a START or ORIGIN statement then
(i) loc_cntr := value specified in operand field;
(ii) size:=0;
(d) If a declaration statement
(i) If a DC statement then
Assemble the constant in machine_code_buffer.
(ii) size: = size of memory area required by DC/DS;
(e) If an imperative statement
(i) Get the operand address from SYMTAB or LITTAB.
(ii) Assemble instruction in machine_code_buffer.
(iii) size: = size of instruction;
(f) if size not equal to 0 then
(i) Move contents of Machine_code_buffer to the address
code_area_address + loc_cntr;
(ii) loc_cntr := loc_cntr + size;
3. (Processing of END statement)
(a) Perform steps 2(b) and 2(f).
(b) Write code_area into output file.
Example:
SAMPLE PROGRAM Input
START 200
READ A
READ B
MOVER AREG, ='5'
MOVER AREG, A
ADD AREG, B
SUB AREG, ='6'
MOVEM AREG, C
PRINT C
LTORG
MOVER AREG, ='15'
MOVER AREG, A
ADD AREG, B
SUB AREG, ='16'
DIV AREG, ='26'
MOVEM AREG, C
A DS 1
B DS 1
C DS 1
STOP
END
Symbol Table
A 216 1 10 1
B 217 1 10 1
C 218 1 10 1
Total Errors: 0
Total Warnings: 0
Literal Table
Lit# Lit Addr
00 ='5' 208
01 ='6' 209
02 ='15' 220
03 ='16' 221
04 ='26' 222
Pool Table
Pool# Pool Base
00 0
01 2
INTERMEDIATE CODE
(AD, 00) (C, 200)
(IS, 09) (S, 00)
(IS, 09) (S, 01)
(IS, 04) (0) (L, 00)
(IS, 04) (0) (S, 00)
(IS, 01) (0) (S, 01)
(IS, 02) (0) (L, 01)
(IS, 05) (0) (S, 02)
(IS, 10) (S, 02)
(AD, 04)
(IS, 04) (0) (L, 02)
(IS, 04) (0) (S, 00)
(IS, 01) (0) (S, 01)
(IS, 02) (0) (L, 03)
(IS, 08) (0) (L, 04)
(IS, 05) (0) (S, 02)
(DL, 01) (C, 01)
(DL, 01) (C, 01)
(DL, 01) (C, 01)
(IS, 00)
(AD, 01)
PASS II OUTPUT
Target Code
200) + 09 0 216
201) + 09 0 217
202) + 04 0 208
203) + 04 0 216
204) + 01 0 217
205) + 02 0 209
206) + 05 0 218
207) + 10 0 218
208) + 00 0 005
209) + 00 0 006
210) + 04 0 220
211) + 04 0 216
212) + 01 0 217
213) + 02 0 221
214) + 08 0 222
215) + 05 0 218
216)
217)
218)
219) + 00 0 000
220) + 00 0 015
221) + 00 0 016
222) + 00 0 026
Conclusion: Thus we are able to design suitable data structures and implement Pass-II of a two-pass
assembler for a pseudo-machine, which is mapped with CO1.
Oral Questions:
ASSIGNMENT NO: 3 – Group A
Title of Assignment:
Write a program to recognize infix expressions using LEX and YACC.
Problem Statement: Write a program in C for a pass-II of two pass macro processor for Implementation of
Macro Processor. Following cases to be considered
a) Macro without any parameters
b) Macro with Positional Parameters
c) Macro with Keyword parameters
d) Macro with positional and keyword parameters.
(Conditional expansion, nested macro implementation not expected)
Theory: Macro Processor: A macro processor is a program that copies a stream of text from one place to
another, making a systematic set of replacements as it does so. Macro processors are often embedded in
other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be
used to process any kind of text. "A macro processor is a program that reads a file (or files) and scans them
for certain keywords. When a keyword is found, it is replaced by some text. The keyword/text combination
is called a macro."
Two new assembler directives used in macro definition are MACRO and MEND.
MACRO: identifies the beginning of a macro definition.
MEND: identifies the end of a macro definition.
In the prototype for the macro, each formal parameter begins with '&'.
Parameter table (index, formal parameter, actual argument):
0 A N1
1 B N2
2 REG BREG
Contents of test.ini
START 100
READ N1
READ N2
+MOVER CREG,N1
+ADD CREG,N2
+MOVEM CREG,N1
+MOVER BREG,N1
+SUB BREG,N2
+MOVEM BREG,N1
STOP
N1 DS 1
N2 DS 1
END
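The expanded listing above (the lines marked with '+') can be produced by a minimal macro expander built on an MNT and MDT. The sketch below is an illustration under assumptions: the `#0`, `#1` positional-parameter notation and all class and macro names are invented for the example, and conditional/nested expansion is not handled, as the problem statement allows.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MacroExpander {
    // Macro name table: macro name -> index of its first line in the MDT.
    static final Map<String, Integer> MNT = new HashMap<>();
    // Macro definition table: body lines of all macros, each ended by MEND.
    static final List<String> MDT = new ArrayList<>();

    static void define(String name, List<String> body) {
        MNT.put(name, MDT.size());
        MDT.addAll(body);
        MDT.add("MEND");
    }

    // Expand a call: substitute actual arguments for positional parameters #0, #1, ...
    static List<String> expand(String name, String[] actuals) {
        List<String> out = new ArrayList<>();
        for (int i = MNT.get(name); !MDT.get(i).equals("MEND"); i++) {
            String line = MDT.get(i);
            for (int a = 0; a < actuals.length; a++)
                line = line.replace("#" + a, actuals[a]);
            out.add("+" + line);   // '+' marks expanded code, as in the listing
        }
        return out;
    }

    public static void main(String[] args) {
        define("INCR", List.of("MOVER #2,#0", "ADD #2,#1", "MOVEM #2,#0"));
        for (String s : expand("INCR", new String[]{"N1", "N2", "CREG"}))
            System.out.println(s);
    }
}
```

Running this prints the first three '+' lines of the listing above: `+MOVER CREG,N1`, `+ADD CREG,N2`, `+MOVEM CREG,N1`.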
Conclusion: Thus we are able to use the LEX and YACC tools, which is mapped with CO2.
ASSIGNMENT NO: 04 – Group B
Assignment Title: Implement a program to solve Classical Problems of Synchronization using Mutex and
Semaphore
Problem Statement: Write a program to solve Classical Problems of Synchronization using Mutex and
Semaphore
Objectives:
1. To understand the concept of semaphore, critical section.
2. Learn implementation of Semaphore.
Theory:
There are a number of processes that only read the data area (readers) and a number that only write to the
data area (writers).
The conditions that must be satisfied are
o Any number of readers may simultaneously read the file.
o Only one writer at a time may write to the file.
o If a writer is writing to the file, no reader may read it.
The data area could be a file, a block of main memory, or even a bank of processor registers, and it is
shared among a number of processes.
● Binary Semaphore –
This is also known as mutex lock. It can have only two values – 0 and 1. Its value is initialized to
1. It is used to implement the solution of critical section problems with multiple processes.
● Counting Semaphore –
Its value can range over an unrestricted domain. It is used to control access to a resource that has
multiple instances.
P operation is also called wait, sleep, or down operation, and V operation is also called signal,
wake-up, or up operation.
Both operations are atomic, and the semaphore is always initialized to one. Here atomic means that
the read, modify and update of the variable happen together with no pre-emption, i.e. in between the
read, modify and update no other operation is performed that may change the variable.
A critical section is surrounded by both operations to implement process synchronization: the critical
section of a process P lies between the P and V operations.
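A minimal Java sketch of one classical problem, the bounded-buffer producer–consumer, using `java.util.concurrent.Semaphore`. Counting semaphores `empty` and `full` track free and filled slots, and a binary semaphore serves as the mutex; class and method names are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

public class ProducerConsumer {
    static final int CAPACITY = 3;
    static final Deque<Integer> buffer = new ArrayDeque<>();
    static final Semaphore empty = new Semaphore(CAPACITY); // free slots
    static final Semaphore full  = new Semaphore(0);        // filled slots
    static final Semaphore mutex = new Semaphore(1);        // binary semaphore

    // Produce items 1..n and consume them; returns the sum of consumed items.
    static int runDemo(int n) throws InterruptedException {
        final int[] sum = {0};
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) {
                    empty.acquire();          // P(empty): wait for a free slot
                    mutex.acquire();          // P(mutex): enter critical section
                    buffer.add(i);
                    mutex.release();          // V(mutex): leave critical section
                    full.release();           // V(full): signal a filled slot
                }
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    full.acquire();
                    mutex.acquire();
                    sum[0] += buffer.poll();
                    mutex.release();
                    empty.release();
                }
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(10)); // sum of 1..10, regardless of interleaving
    }
}
```

Whatever the thread interleaving, the semaphores guarantee the buffer never overflows and every produced item is consumed exactly once.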
Conclusion: Thus students are able to implement a program to solve Classical Problems of
Synchronization using Mutex and Semaphore, which is mapped to CO3.
QUESTIONS:
1. What is synchronization?
ASSIGNMENT NO: 05 – Group B
Problem Statement: Write a Java program (using OOP features) to implement the following scheduling
algorithms: FCFS, SJF (Preemptive), Priority (Non-Preemptive) and Round Robin (Preemptive).
Objectives:
1. To understand concept of scheduling.
2. To learn and use scheduling algorithms.
Theory:
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive. The objective of
multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a
uniprocessor system, only one process may run at a time; any other processes must wait until the CPU is
free and can be rescheduled.
The idea of multiprogramming is relatively simple. A process is executed until it must wait, typically
for the completion of some I/O request. In a simple computer system, the CPU would then sit idle; all this
waiting time is wasted. With multiprogramming, this time is used productively. Several processes are
kept in memory at one time. When one process has to wait, the operating system takes the CPU away from
that process and gives the CPU to another process. This pattern continues.
Scheduling is a fundamental operating system function. Almost all computer resources are scheduled
before use. The CPU is one of the primary computer resources. Thus, its scheduling is central to operating
system design.
Preemptive Scheduling
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, I/O request, or
invocation of wait for the termination of one of the child processes)
2. When a process switches from the running state to the ready state (for example, when an interrupt
occurs)
3. When a process switches from the waiting state to the ready state (for example, completion of I/O)
4. When a process terminates
In circumstances 1 and 4, there is no choice in terms of scheduling. A new process must be selected for
execution. There is a choice in circumstances 2 and 3.
When the scheduling takes place only under circumstances 1 and 4, the scheduling is non-preemptive;
otherwise, the scheduling scheme is preemptive. Under non-preemptive scheduling, once the CPU has been
allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by
switching to the waiting state. This scheduling method is used by the Microsoft Windows 3.1 and by the
Apple Macintosh operating systems. It is the only method that can be used on certain hardware platforms,
because it does not require the special hardware needed for preemptive scheduling.
Preemptive scheduling incurs a cost. Consider the case of two processes sharing data. One may be in
the midst of updating the data when it is preempted and the second process is run. The second process may
try to read the data, which are currently in an inconsistent state. New mechanisms are thus needed to
coordinate access to shared data.
Preemption also has an effect on the design of the operating system kernel. During the processing of
a system call, the kernel may be busy with an activity on behalf of a process. Such activities may involve
changing important kernel data. If the process is preempted in the middle of these changes, and the kernel
needs to read or modify the structure, chaos could ensue. Some operating systems, including most versions of
UNIX, deal with this problem by waiting either for a system call to complete, or for an I/O block to take
place, before doing a context switch. This scheme ensures that the kernel structure is simple, since the
kernel will not preempt a process while the kernel data structures are in an inconsistent state. Unfortunately,
this kernel execution model is a poor one for supporting real-time computing and multiprocessing.
First-Come, First-Served Scheduling
The simplest CPU scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm.
With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of
the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is
linked onto the tail of the queue. The running process is then removed from the queue. The code for FCFS
scheduling is simple to write and understand.
The average waiting time under the FCFS policy is often quite long. Consider the following set of
processes that arrive at time 0, with the length of the CPU burst time given in milliseconds.
Process Burst Time
P1 24
P2 3
P3 3
If the processes arrive in the order P1,P2,P3, and are served in FCFS order, the result is obtained as shown in
the Gantt chart:
P1 P2 P3
0 24 27 30
The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds
for process P3. Thus, the average waiting time is (0+24+27)/3=17 milliseconds. If the processes arrive in the
order P2,P3,P1, the results will be as shown in the following Gantt chart:
P2 P3 P1
0 3 6 30
The average waiting time is now (0+3+6)/3=3 milliseconds. This reduction is substantial. Thus, the
average waiting time under FCFS policy is generally not minimal, and may vary substantially if the process
CPU burst times vary greatly.
In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume only one
CPU bound process and many I/O bound processes are there. As the processes flow around the system, the
following scenario may result. The CPU bound process will get the CPU and hold it. During this time, all
the other processes will finish their I/O and move into the ready queue, the I/O devices are idle. Eventually,
the CPU bound process finishes its CPU burst and moves to an I/O device. All the I/O bound processes,
which have very short CPU bursts, execute quickly and move back to the I/O queues. At this point, the CPU
sits idle. The CPU bound process will then move back to the ready queue and be allocated to the CPU.
Again, all the I/O processes end up waiting in the ready queue until the CPU bound process is done. There
is a convoy effect, as all the other processes wait for the one big process to get off the CPU. This effect
results in lower CPU and device utilization than might be possible if the shorter processes were allowed to
go first.
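The waiting-time calculations above can be reproduced with a short Java sketch. The class and method names are illustrative; processes are assumed to arrive at time 0 and are served in the order given.

```java
public class FCFSDemo {
    // Average waiting time under FCFS for processes that all arrive at time 0.
    static double averageWaitingTime(int[] bursts) {
        double totalWait = 0;
        int clock = 0;
        for (int b : bursts) {
            totalWait += clock;   // each process waits for all earlier bursts
            clock += b;
        }
        return totalWait / bursts.length;
    }

    public static void main(String[] args) {
        System.out.println(averageWaitingTime(new int[]{24, 3, 3})); // order P1,P2,P3
        System.out.println(averageWaitingTime(new int[]{3, 3, 24})); // order P2,P3,P1
    }
}
```

For the arrival order P1, P2, P3 this yields 17 milliseconds, and for P2, P3, P1 it yields 3 milliseconds, matching the Gantt charts above.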
Shortest-Job-First Scheduling
A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This
algorithm associates with each process the length of that process's next CPU burst. When the CPU is available,
it is assigned to the process that has the smallest next CPU burst. If two processes have the same length next
CPU burst, FCFS scheduling is used to break the tie. The appropriate term would be the shortest next CPU
burst, because the scheduling is done by examining the length of the next CPU burst of a process, rather
than its total length.
Consider the following set of processes, with length of the CPU burst time given in milliseconds as an
example:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
Using SJF scheduling, these processes are scheduled according to the following Gantt chart:
P4 P1 P3 P2
0 3 9 16 24
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for
process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3+16+9+0)/4=7
milliseconds. Under the FCFS scheduling scheme, the average waiting time would have been 10.25
milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time
for a given set of processes. By moving a short process before a long one, the waiting time of the short
process decreases more than it increases the waiting time of the long process. Consequently, the average
waiting time decreases.
The real difficulty with the SJF algorithm is to know the length of the next CPU request. For long
term (or job) scheduling in a batch system, we can use as the length the process time limit that a user
specifies when he submits the job. Thus, users are motivated to estimate the process time limit accurately,
since a lower value may mean faster response. SJF scheduling is used frequently in long-term scheduling.
Although the SJF algorithm is optimal, it cannot be implemented at the level of short term CPU
scheduling. There is no way to know the length of the next CPU burst. One approach is to try to
approximate SJF scheduling. The length of the next CPU burst may not be known, but it can be predicted. It
is expected that the next CPU burst will be similar in length to the previous ones. Thus, by computing an
approximation of the length of the next CPU burst, the process with the shortest predicted CPU burst can be
picked.
The next CPU burst is generally predicted as an exponential average of the measured lengths of
previous CPU bursts. Let tn be the length of the nth CPU burst, and let Tn+1 be the predicted value for the
next CPU burst. Then, for α, 0 ≤ α ≤ 1, define
Tn+1 = α tn + (1 − α) Tn
This formula defines an exponential average. The value of tn contains the most recent information; Tn
stores the past history. The parameter α controls the relative weight of recent and past
history in the prediction. If α = 0, then Tn+1 = Tn, and recent history has no effect; if α = 1, then
Tn+1 = tn, and only the most recent CPU burst matters.
More commonly, α = 1/2, so recent history and past history are equally weighted. The initial T0 can be
defined as a constant or as an overall system average. Figure 6.1 shows an exponential average with α = 1/2
and T0 = 10. To understand the behavior of the exponential average, the formula for Tn+1 can be expanded by
substituting for Tn, to find
Tn+1 = α tn + (1 − α) α tn−1 + … + (1 − α)^j α tn−j + … + (1 − α)^(n+1) T0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
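The prediction formula can be checked with a one-method Java sketch (names are illustrative):

```java
public class BurstPredictor {
    // Exponential average: T(n+1) = alpha * t(n) + (1 - alpha) * T(n)
    static double next(double alpha, double tn, double Tn) {
        return alpha * tn + (1 - alpha) * Tn;
    }

    public static void main(String[] args) {
        // alpha = 1/2, T0 = 10, observed burst t0 = 6  ->  T1 = 8
        System.out.println(next(0.5, 6, 10));
        // alpha = 0: prediction ignores the new burst; alpha = 1: only the new burst counts
        System.out.println(next(0.0, 6, 10));
        System.out.println(next(1.0, 6, 10));
    }
}
```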
The SJF algorithm may be either preemptive or non-preemptive. The choice arises when a new process
arrives at the ready queue while a previous process is executing. The new process may have a shorter next
CPU burst than what is left of the currently executing process. A preemptive SJF algorithm will preempt the
currently executing process, whereas a non-preemptive SJF algorithm will allow the currently running
process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.
Consider the following four processes, with length of the CPU burst time given in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
If the processes arrive at the ready queue at the times shown and need the indicated burst times, then
the resulting preemptive SJF schedule is as depicted in the following Gantt chart:
0 1 5 10 17 26
P1 P2 P4 P1 P3
Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1.
The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4
milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this
example is ((10−1)+(1−1)+(17−2)+(5−3))/4 = 26/4 = 6.5 milliseconds. A non-preemptive SJF scheduling would
result in an average waiting time of 7.75 milliseconds.
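The preemptive SJF (shortest-remaining-time-first) schedule above can be simulated tick by tick in Java. This is a minimal sketch, not an OOP design as the assignment ultimately requires; the names are illustrative, and ties are broken in favor of the lower-numbered process.

```java
public class SRTFDemo {
    // Average waiting time under preemptive SJF (shortest remaining time first).
    static double averageWaitingTime(int[] arrival, int[] burst) {
        int n = arrival.length;
        int[] remaining = burst.clone();
        int[] finish = new int[n];
        int done = 0, time = 0;
        while (done < n) {
            // Pick the arrived, unfinished process with the shortest remaining time.
            int pick = -1;
            for (int i = 0; i < n; i++)
                if (arrival[i] <= time && remaining[i] > 0
                        && (pick == -1 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick == -1) { time++; continue; }   // CPU idle
            remaining[pick]--;
            time++;
            if (remaining[pick] == 0) { finish[pick] = time; done++; }
        }
        double total = 0;
        for (int i = 0; i < n; i++)
            total += finish[i] - arrival[i] - burst[i]; // waiting = turnaround - burst
        return total / n;
    }

    public static void main(String[] args) {
        // P1..P4 from the example above
        System.out.println(averageWaitingTime(new int[]{0, 1, 2, 3},
                                              new int[]{8, 4, 9, 5}));
    }
}
```

For the four processes above this reproduces the 6.5-millisecond average waiting time derived in the text.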
Priority Scheduling
The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is
associated with each process, and the CPU is allocated to the process with the highest priority. Equal-
priority processes are scheduled in FCFS order.
An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted)
next CPU burst. The larger the CPU burst, the lower the priority, and vice versa.
Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095. However, there is
no general agreement on whether 0 is the highest or lowest priority. Some systems use low numbers to
represent low priority; others use low numbers for high priority. This difference can lead to confusion. In
this description, low priority numbers are used to represent high priority.
Consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, …, P5,
with the length of the CPU burst time given in milliseconds as an example:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Using priority scheduling, these processes are scheduled according to the following Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
The average waiting time is 8.2 milliseconds.
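The schedule above can be reproduced with a small Java sketch of non-preemptive priority scheduling. Names are illustrative; all processes are assumed to arrive at time 0, and, as in the text, a lower priority number means higher priority.

```java
import java.util.Arrays;
import java.util.Comparator;

public class PriorityDemo {
    // Non-preemptive priority scheduling, all processes arriving at time 0.
    static double averageWaitingTime(int[] burst, int[] priority) {
        int n = burst.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Sort by priority number (stable sort keeps FCFS order for ties).
        Arrays.sort(order, Comparator.comparingInt(i -> priority[i]));
        double total = 0;
        int clock = 0;
        for (int i : order) {
            total += clock;       // waiting time = start time for arrival at 0
            clock += burst[i];
        }
        return total / n;
    }

    public static void main(String[] args) {
        // P1..P5 from the example above
        System.out.println(averageWaitingTime(new int[]{10, 1, 2, 1, 5},
                                              new int[]{3, 1, 4, 5, 2}));
    }
}
```

This yields the 8.2-millisecond average waiting time stated above.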
Priorities can be defined either internally or externally. Internally defined priorities use some
measurable quantity or quantities to compute the priority of a process. For example, time limits, memory
requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have been
used in computing priorities. External priorities are set by criteria that are external to the operating system,
such as the importance of the process, the type and amount of funds being paid for computer use, the
department sponsoring the work, and other, often political factors.
Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready
queue, its priority is compared with priority of the currently running process. A preemptive priority
scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the
priority of the currently running process. A non-preemptive priority scheduling algorithm will simply put the
new process at the head of the ready queue.
Round Robin Scheduling
The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems: each process
receives the CPU for a small unit of time, called a time quantum, and is then preempted and added to the
tail of the ready queue. Consider again the processes P1 (burst 24 ms), P2 (3 ms) and P3 (3 ms) with a time
quantum of 4 milliseconds. Process P1 gets the first 4 milliseconds; since it requires another 20
milliseconds, it is preempted after the first time quantum expires. Process P2 does not need a full quantum
and quits when its burst completes. The CPU is then given to the next process, process P3. Once each
process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum.
The resulting RR schedule is
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
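The RR schedule above can be simulated with a FIFO ready queue in Java (a minimal sketch with illustrative names; all processes are assumed to arrive at time 0):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RRDemo {
    // Average waiting time under round robin for processes arriving at time 0.
    static double averageWaitingTime(int[] burst, int quantum) {
        int n = burst.length;
        int[] remaining = burst.clone();
        int[] finish = new int[n];
        Deque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < n; i++) queue.add(i);
        int time = 0;
        while (!queue.isEmpty()) {
            int p = queue.poll();
            int run = Math.min(quantum, remaining[p]); // run one quantum at most
            time += run;
            remaining[p] -= run;
            if (remaining[p] > 0) queue.add(p);        // back to the tail
            else finish[p] = time;
        }
        double total = 0;
        for (int i = 0; i < n; i++)
            total += finish[i] - burst[i];             // waiting = finish - burst
        return total / n;
    }

    public static void main(String[] args) {
        System.out.println(averageWaitingTime(new int[]{24, 3, 3}, 4));
    }
}
```

For P1 = 24, P2 = 3, P3 = 3 with quantum 4, the finish times are 30, 7 and 10, matching the chart, and the average waiting time is 17/3 ≈ 5.67 milliseconds.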
Conclusion: Thus we are able to implement internals and functionalities of the operating system and to
perform CPU scheduling with the help of different algorithms, which is mapped with CO3.
QUESTIONS:
1. What is scheduling? List types of schedulers & scheduling.
2. List and define scheduling criteria.
3. Define preemption & non-preemption.
4. State FCFS, SJF, Priority & Round Robin scheduling.
5. Compare FCFS, SJF, RR, Priority
ASSIGNMENT NO: 06 – Group B
Assignment Title: Implement a program to simulate Memory placement strategies – best fit, first fit, next fit
and worst fit.
Problem Statement: Write a program to simulate Memory placement strategies – best fit, first fit, next fit and
worst fit.
Objectives:
1. To understand concept of memory placement technique.
2. To implement concept of memory placement technique.
Theory:
First Fit:
o Advantages
● It is the fastest search, as it allocates the first free block that is large enough to hold the process.
o Disadvantages
● It may prevent later processes from obtaining space even though allocation was possible. Consider,
for example, blocks of sizes 100, 500, 200, 300 and 600 and processes of sizes 212, 417, 112 and
426: process number 4 (of size 426) does not get memory under first fit, although it would have
been possible using best-fit allocation [block number 4 (of size 300) to process 1, block number 2
to process 2, block number 3 to process 3 and block number 5 to process 4].
● Implementation:
● Input memory blocks with size and processes with size.
● Initialize all memory blocks as free.
● Start by picking each process and check if it can be assigned to the current block.
● If size-of-process <= size-of-block, assign it and check the next process.
● If not, keep checking the further blocks.
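The steps above can be sketched in Java as follows (class and variable names are illustrative; each process receives at most one block, which keeps serving later processes with its leftover space):

```java
import java.util.Arrays;

public class FirstFit {
    // First fit: place each process in the first free block large enough to
    // hold it. Returns, for each process, the chosen block index or -1.
    static int[] allocate(int[] blockSize, int[] processSize) {
        int[] free = blockSize.clone();          // remaining space per block
        int[] assigned = new int[processSize.length];
        for (int p = 0; p < processSize.length; p++) {
            assigned[p] = -1;
            for (int b = 0; b < free.length; b++) {
                if (processSize[p] <= free[b]) { // first block that fits
                    assigned[p] = b;
                    free[b] -= processSize[p];
                    break;
                }
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        int[] result = allocate(new int[]{100, 500, 200, 300, 600},
                                new int[]{212, 417, 112, 426});
        System.out.println(Arrays.toString(result));
    }
}
```

With the example data from the text, process 4 (size 426) ends up unallocated (`-1`), illustrating the disadvantage noted above.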
Next Fit
Next fit is a modified version of 'first fit'. It begins like first fit in searching for a free partition, but when
called the next time it starts from where it left off, not from the beginning. This policy makes use of a
roving pointer, which moves along the memory chain to search for the next fit. This helps avoid always
allocating memory from the head (beginning) of the free block chain.
● Advantages:
First fit is a straightforward and fast algorithm, but it tends to cut large portions of free memory into
small pieces; as a result, processes that need a large block of memory may not get one even if the sum
of all the small pieces is greater than required. This is the so-called external fragmentation problem.
Another problem with first fit is that it tends to allocate memory at the beginning of memory, which may
lead to more fragments accumulating there. Next fit addresses this by starting the search for a free
portion not from the start of memory but from where the previous search ended.
Next fit is a very fast searching algorithm and is comparatively faster than the First Fit and Best Fit
memory management algorithms.
● Implementation
● Input the number of memory blocks and their sizes, and initialize all the blocks as free.
● Input the number of processes and their sizes.
● Pick each process and check whether it can be assigned to the current block; if yes, allocate it
the required memory and check the next process, resuming from the block where the search left off,
not from the beginning.
● If the current block is too small, keep checking the further blocks.
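A minimal Java sketch of next fit (names are illustrative). The only change from first fit is the roving pointer `start`, which makes each search resume where the previous one stopped:

```java
import java.util.Arrays;

public class NextFit {
    public static int[] allocate(int[] blockSize, int[] processSize) {
        int[] allocation = new int[processSize.length];
        Arrays.fill(allocation, -1);
        boolean[] occupied = new boolean[blockSize.length];
        int start = 0; // roving pointer: where the last search left off

        for (int p = 0; p < processSize.length; p++) {
            // scan at most one full cycle of the block list, beginning at 'start'
            for (int k = 0; k < blockSize.length; k++) {
                int b = (start + k) % blockSize.length;
                if (!occupied[b] && blockSize[b] >= processSize[p]) {
                    allocation[p] = b;
                    occupied[b] = true;
                    start = (b + 1) % blockSize.length; // resume here next time
                    break;
                }
            }
        }
        return allocation;
    }

    public static void main(String[] args) {
        int[] blocks = {100, 500, 200, 300, 600};
        int[] procs  = {212, 417, 112, 426};
        System.out.println(Arrays.toString(allocate(blocks, procs)));
    }
}
```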
Worst Fit
Allocates a process to the largest free partition that is sufficient among the available partitions in
main memory. If a large process arrives at a later stage, memory may no longer have space to
accommodate it.
o Implementation:
● Input memory blocks and processes with sizes.
● Initialize all memory blocks as free.
● Pick each process and find the maximum block size that can be assigned to the current
process, i.e. find max(blockSize[1], blockSize[2], ..., blockSize[n]) >= processSize[current]; if
such a block is found, assign it to the current process.
● If not, leave that process and keep checking the further processes.
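The worst-fit steps can be sketched as follows (again an illustrative Java class, not a prescribed implementation): for each process we linearly scan for the largest free block that is still big enough.

```java
import java.util.Arrays;

public class WorstFit {
    public static int[] allocate(int[] blockSize, int[] processSize) {
        int[] allocation = new int[processSize.length];
        Arrays.fill(allocation, -1);
        boolean[] occupied = new boolean[blockSize.length];

        for (int p = 0; p < processSize.length; p++) {
            int worst = -1;
            for (int b = 0; b < blockSize.length; b++) {
                if (!occupied[b] && blockSize[b] >= processSize[p]
                        && (worst == -1 || blockSize[b] > blockSize[worst])) {
                    worst = b; // largest sufficient block seen so far
                }
            }
            if (worst != -1) {
                allocation[p] = worst;
                occupied[worst] = true;
            }
        }
        return allocation;
    }

    public static void main(String[] args) {
        int[] blocks = {100, 500, 200, 300, 600};
        int[] procs  = {212, 417, 112, 426};
        System.out.println(Arrays.toString(allocate(blocks, procs)));
    }
}
```

Note that on this example worst fit gives 212 the 600-word block, so process 4 (size 426) again goes unallocated, illustrating the disadvantage described above.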
Conclusion: Thus we have studied which processes and files are allocated to free blocks, which
processes and files are not allocated memory, and the free space list that remains after allocation,
so that this free space can be utilized. This is mapped with CO3.
QUESTIONS:
1. Explain the best fit.
2. Explain the worst fit.
3. Explain the next fit.
ASSIGNMENT NO: 07 – Group B
Assignment Title: Implementation of a concept called Paging, using simulation.
Problem Statement: Write a Java Program (using OOP features) to implement paging simulation using
1. Least Recently Used (LRU)
2. Optimal algorithm
1.2 Objectives:
1. To understand the concept of paging.
2. To learn page replacement techniques.
1.3 Theory: Paging: Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory. This scheme permits the physical address space of a process to be
non-contiguous.
● Logical Address or Virtual Address (represented in bits): An address generated by the CPU
● Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all
logical addresses generated by a program
● Physical Address (represented in bits): An address actually available on the memory unit
● Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses
Example:
● If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
● If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2 2^27 = 27 bits
● If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
● If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2 2^24 = 24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU) which is a
hardware device and this mapping is known as paging technique.
● The Physical Address Space is conceptually divided into a number of fixed-size blocks, called
frames.
● The Logical Address Space is also split into fixed-size blocks, called pages.
● Page Size = Frame Size
For example:
● Page number (p): Number of bits required to represent a page in the Logical Address Space.
● Page offset (d): Number of bits required to represent a particular word within a page, i.e. the
page size of the Logical Address Space (the word number within a page).
● Frame number (f): Number of bits required to represent a frame in the Physical Address Space.
● Frame offset (d): Number of bits required to represent a particular word within a frame, i.e. the
frame size of the Physical Address Space (the word number within a frame).
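When the page size is a power of two, the split into page number and offset is just a shift and a mask. A small Java sketch (the 1024-word page size and the address 5000 are our own illustrative numbers, not taken from the manual):

```java
public class AddressSplit {
    static final int OFFSET_BITS = 10;           // page size = 2^10 = 1024 words
    static final int PAGE_SIZE   = 1 << OFFSET_BITS;

    // High-order bits give the page number (p).
    public static int pageNumber(int logical) { return logical >>> OFFSET_BITS; }

    // Low-order bits give the page offset (d).
    public static int pageOffset(int logical) { return logical & (PAGE_SIZE - 1); }

    public static void main(String[] args) {
        int addr = 5000;                         // 5000 = 4 * 1024 + 904
        System.out.println(pageNumber(addr));    // 4
        System.out.println(pageOffset(addr));    // 904
    }
}
```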
The hardware implementation of the page table can be done using dedicated registers, but registers are
satisfactory only if the page table is small. If the page table contains a large number of entries, we can use
a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.
In operating systems that use paging for memory management, a page replacement algorithm is needed to
decide which page should be replaced when a new page comes in. Whenever a page is referenced but is
not present in memory, a page fault occurs and the operating system replaces one of the existing pages
with the newly needed page. Different page replacement algorithms suggest different ways to decide which
page to replace; the goal of all of them is to reduce the number of page faults.
Page Fault – A page fault is a type of interrupt, raised by the hardware when a running program accesses a
memory page that is mapped into the virtual address space, but not loaded in physical memory.
1. Least Recently Used: In this algorithm, the page that was least recently used is replaced.
Algorithm:
Let capacity be the number of pages that memory can hold, and let set be the current set of pages in memory.
Start traversing the pages.
i) If set holds fewer pages than capacity:
a) Insert pages into the set one by one until the size of set reaches capacity or all page requests are
processed.
b) Simultaneously maintain the most recent index of each page in a map called indexes.
c) Increment page faults.
ii) Else:
If the current page is present in set, do nothing.
Else:
a) Find the page in the set that was least recently used, using the indexes
map: the page with the minimum index is the one to replace.
b) Replace the found page with the current page.
c) Increment page faults.
d) Update the index of the current page.
iii) Return page faults.
Consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 with 4 page slots, all initially
empty. When 7 0 1 2 are allocated to the empty slots —> 4 page
faults. 0 is already there, so —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is the least recently used —> 1 page fault.
0 is already in memory, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because those pages are already in the
memory.
Example: Let's take the reference string a, b, c, d, c, a, d, b, e, b, a, b, c, d with a frame size of 4.
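The LRU algorithm above can be sketched in Java as follows (a minimal fault counter, not a full simulator; the manual's assignment also asks for OOP structure, reporting, etc., which is omitted here). The `lastUsed` map plays the role of the `indexes` map in the algorithm:

```java
import java.util.*;

public class LruPaging {
    // Counts page faults for a reference string with 'capacity' frames,
    // replacing the least recently used page on a miss when memory is full.
    public static int pageFaults(int[] refs, int capacity) {
        Set<Integer> frames = new HashSet<>();
        Map<Integer, Integer> lastUsed = new HashMap<>(); // page -> most recent index
        int faults = 0;
        for (int i = 0; i < refs.length; i++) {
            int page = refs[i];
            if (!frames.contains(page)) {
                faults++;
                if (frames.size() == capacity) {
                    // victim = page with the smallest "last used" index
                    int victim = -1;
                    for (int p : frames) {
                        if (victim == -1 || lastUsed.get(p) < lastUsed.get(victim)) {
                            victim = p;
                        }
                    }
                    frames.remove(victim);
                }
                frames.add(page);
            }
            lastUsed.put(page, i); // update recency on every reference, hit or miss
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
        System.out.println(pageFaults(refs, 4)); // 6, matching the trace above
    }
}
```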
2. Optimal Page Replacement: In this algorithm, the page that will not be used for the longest
duration of time in the future is replaced. Consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.
Initially all slots are empty, so when 7 0 1 2 are allocated to the empty slots —> 4 page faults.
0 is already there, so —> 0 page faults.
When 3 comes it takes the place of 7, because 7 is not used for the longest duration in the future —> 1 page fault.
0 is already there, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because those pages are already in the
memory.
Example 2: Let's take the reference string a, b, c, d, c, a, d, b, e, b, a, b, c, d with a frame size of 4.
Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know
future requests. The use of Optimal Page Replacement is to set up a benchmark against which other
replacement algorithms can be analyzed.
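For comparison, a minimal Java sketch of the Optimal algorithm (illustrative names again; note that it needs the whole reference string up front, which is exactly why it can only serve as a benchmark): on a miss with full frames, it evicts the page whose next use lies farthest in the future.

```java
import java.util.*;

public class OptimalPaging {
    public static int pageFaults(int[] refs, int capacity) {
        Set<Integer> frames = new HashSet<>();
        int faults = 0;
        for (int i = 0; i < refs.length; i++) {
            if (frames.contains(refs[i])) continue; // hit: nothing to do
            faults++;
            if (frames.size() == capacity) {
                // evict the page whose next use is farthest away (or never)
                int victim = -1, farthest = -1;
                for (int p : frames) {
                    int next = Integer.MAX_VALUE; // assume never used again
                    for (int j = i + 1; j < refs.length; j++) {
                        if (refs[j] == p) { next = j; break; }
                    }
                    if (next > farthest) { farthest = next; victim = p; }
                }
                frames.remove(victim);
            }
            frames.add(refs[i]);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
        System.out.println(pageFaults(refs, 4)); // 6, matching the trace above
    }
}
```

On this particular string with 4 frames, Optimal and LRU happen to give the same fault count (6); on other strings Optimal is never worse.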
Conclusion: Thus, we have studied various paging and page replacement techniques. This is mapped with CO2.
Oral Questions: