SS-Lab Manual (2018-22) - Student
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
(AY:2020-21)
LAB MANUAL
Version number: 1.3
Prepared by:
SYLLABUS
software.
Expected Outcome
The students will be able to
i. Compare and analyze CPU scheduling algorithms like FCFS, Round Robin, SJF, and
Priority.
ii. Implement basic memory management schemes like paging.
iii. Implement synchronization techniques using semaphores etc.
iv. Implement Banker's algorithm for deadlock avoidance.
v. Implement memory management schemes, page replacement schemes, and file allocation
and organization techniques.
vi. Implement system software such as loaders, assemblers and macro processors.
Part B
10. Implement the symbol table functions: create, insert, modify, search, and display.
11. Implement pass one of a two pass assembler.*
12. Implement pass two of a two pass assembler.*
13. Implement a single pass assembler.*
14. Implement a two pass macro processor.*
15. Implement a single pass macro processor.
16. Implement an absolute loader.
17. Implement a relocating loader.
18. Implement pass one of a direct-linking loader.
19. Implement pass two of a direct-linking loader.
20. Implement a simple text editor with features like insertion / deletion of a character, word, and
sentence.
21. Implement a symbol table with suitable hashing.*
EXPERIMENT 1
AIM
To simulate the dining philosophers problem using semaphores.
DESCRIPTION
The Dining Philosophers Problem – The dining philosophers problem is a classic synchronization
problem which is used to evaluate situations where multiple resources must be allocated to
multiple processes. It assumes that 5 philosophers (representing processes) are seated around a circular
table with one chopstick (representing a resource) between each pair of philosophers. A philosopher
spends her life thinking and eating. In the center of the table is a bowl of noodles. When a
philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets
hungry and tries to pick up the two chopsticks that are between her and her left and right neighbours. A
philosopher may pick up only one chopstick at a time. A philosopher may eat if she can pick up the
two chopsticks adjacent to her. She eats without releasing the chopsticks. When she is finished eating,
she puts down both chopsticks and starts thinking again.
Semaphore Solution to Dining Philosophers – A philosopher can be in one of three states: THINKING,
HUNGRY and EATING. Two kinds of semaphores are used: a mutex and a semaphore array for the
philosophers. The mutex ensures that no two philosophers execute the pickup or putdown operations at the
same time. The array is used to control the behavior of each philosopher. However, semaphores can result
in deadlock due to programming errors.
ALGORITHM
1. START
2. Declare the number of philosophers.
3. Declare a semaphore mutex so that picking up and putting down chopsticks is done by only one philosopher at a time.
4. Declare one semaphore per chopstick.
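A minimal C sketch of the above, using POSIX threads with one semaphore per chopstick plus a mutex, is given below. It is only an illustration, not the manual's reference program: the 5-philosopher count, the function names and the simplification of acquiring both chopsticks while holding the mutex are assumptions made for brevity.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define N 5                                /* number of philosophers */

sem_t chopstick[N];                        /* one semaphore per chopstick */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    printf("Philosopher %d is thinking\n", i + 1);

    pthread_mutex_lock(&mutex);            /* pick up both chopsticks atomically          */
    sem_wait(&chopstick[i]);               /* left chopstick                              */
    sem_wait(&chopstick[(i + 1) % N]);     /* right chopstick                             */
    pthread_mutex_unlock(&mutex);          /* (serializes pickups, but avoids deadlock)   */

    printf("Philosopher %d is Eating\n", i + 1);
    sleep(1);

    sem_post(&chopstick[i]);               /* put both chopsticks down */
    sem_post(&chopstick[(i + 1) % N]);
    printf("Philosopher %d is thinking again\n", i + 1);
    return NULL;
}

int main(void)
{
    pthread_t tid[N];
    int id[N];

    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);     /* every chopstick is initially free */
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&tid[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    return 0;
}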
OUTPUT
Philosopher 1 is thinking
Philosopher 2 is thinking
Philosopher 3 is thinking
Philosopher 4 is thinking
Philosopher 5 is thinking
Philosopher 2 is Hungry
Philosopher 1 is Hungry
Philosopher 3 is Hungry
Philosopher 4 is Hungry
Philosopher 5 is Hungry
Philosopher 5 takes fork 4 and 5
Philosopher 5 is Eating
Philosopher 5 putting fork 4 and 5 down
Philosopher 5 is thinking
Philosopher 4 takes fork 3 and 4
Philosopher 4 is Eating
Philosopher 1 takes fork 5 and 1
Philosopher 1 is Eating
Philosopher 4 putting fork 3 and 4 down
Philosopher 4 is thinking
Philosopher 3 takes fork 2 and 3
Philosopher 3 is Eating
Philosopher 1 putting fork 5 and 1 down
Philosopher 1 is thinking
Philosopher 5 is Hungry
Philosopher 5 takes fork 4 and 5
Philosopher 5 is Eating
Philosopher 3 putting fork 2 and 3 down
Philosopher 3 is thinking
Philosopher 2 takes fork 1 and 2
Philosopher 2 is Eating
Philosopher 1 is Hungry
RESULT
The dining philosophers problem was simulated using semaphores and the output was verified.
EXPERIMENT 2
AIM
To simulate the producer-consumer (bounded-buffer) problem using semaphores.
DESCRIPTION
In computing, the producer–consumer problem (also known as the bounded-buffer problem) is a
classic example of a multi-process synchronization problem. The problem describes two processes,
the producer and the consumer, who share a common, fixed-size buffer used as a queue. The
producer's job is to generate data, put it into the buffer, and start again. At the same time, the
consumer is consuming the data (i.e., removing it from the buffer), one piece at a time. The problem
is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer
won't try to remove data from an empty buffer.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time
the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer
again. In the same way, the consumer can go to sleep if it finds the buffer empty. The next time the
producer puts data into the buffer, it wakes up the sleeping consumer. The solution can be reached by
means of inter-process communication, typically using semaphores.
A semaphore S is an integer variable that can be accessed only through two standard operations:
wait() and signal(). The wait() operation reduces the value of semaphore by 1 and the signal()
operation increases its value by 1. In this solution, we use one binary semaphore – mutex – and two
counting semaphores – full and empty. "Mutex" is for acquiring and releasing the lock on the shared
buffer. "Full" keeps track of the number of items in the buffer at any given time and "empty" keeps track
of the number of unoccupied slots.
Initialization of semaphores
mutex = 1
full = 0 // Initially, all slots are empty. Thus full slots are 0
empty = n // All slots are empty initially
ALGORITHM:
1. Declare 3 semaphores: mutex, full, and empty.
2. Initialize mutex to 1, full to 0 and empty to n.
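A minimal C sketch of this solution using POSIX threads and semaphores is shown below; the buffer size, the item count and the names are illustrative assumptions, not the manual's reference program.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                      /* number of buffer slots        */
#define ITEMS 10                 /* items to produce and consume  */

int buffer[N], in = 0, out = 0;
sem_t mutex, full, empty;

void *producer(void *arg)
{
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty);        /* wait for a free slot          */
        sem_wait(&mutex);        /* lock the shared buffer        */
        buffer[in] = i;
        printf("Produced %d\n", i);
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);         /* one more filled slot          */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&full);         /* wait for a filled slot        */
        sem_wait(&mutex);
        int item = buffer[out];
        printf("Consumed %d\n", item);
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);        /* one more empty slot           */
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);      /* mutex = 1 */
    sem_init(&full, 0, 0);       /* full  = 0 */
    sem_init(&empty, 0, N);      /* empty = n */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}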
OUTPUT
RESULT
The producer-consumer problem was simulated using semaphores and the output was verified.
EXPERIMENT 3
AIM
To simulate the following CPU Scheduling techniques:
a) First Come First Serve (FCFS)
b) Shortest Job First (SJF)
c) Round Robin scheduling
d) Priority scheduling
DESCRIPTION
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be
allocated the CPU.
1. FCFS
First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest
scheduling algorithm. FIFO simply queues processes in the order that they arrive in the ready
queue. In this, the process that comes first will be executed first and the next process starts only after
the previous one is fully executed. Here we consider that the arrival time for all processes is 0.
Turn Around Time (Total time): Time difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time: Time difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time (Service time)
2. SJF
Shortest job first (SJF), or shortest job next (SJN), is a scheduling policy that selects the waiting process
with the smallest execution time to execute next. SJF is a non-preemptive algorithm. It has the
advantage of having the minimum average waiting time among all scheduling algorithms. It is a
greedy algorithm. It may cause starvation if shorter processes keep coming; this problem can be
solved using the concept of aging. It is practically infeasible, as the operating system may not know
the burst times and therefore may not be able to sort the processes. While it is not possible to predict
execution times exactly, several methods can be used to estimate the execution time for a job, such as
a weighted average of previous execution times. SJF can be used in specialized environments where
accurate estimates of running time are available.
3. Round Robin Scheduling
Round Robin is a CPU scheduling algorithm where each process is assigned a fixed time slot in
a cyclic way. It is simple, easy to implement, and starvation-free, as all processes get a fair share of
the CPU; it is one of the most commonly used techniques in CPU scheduling. It is preemptive,
as processes are assigned the CPU only for a fixed slice of time at most. Its disadvantage is
the higher overhead of context switching.
4. Priority Scheduling
Priority scheduling is one of the most common scheduling algorithms in batch systems. Each
process is assigned a priority. Process with the highest priority is to be executed first and so on.
Processes with the same priority are executed on first come first served basis. Priority can be
decided based on memory requirements, time requirements or any other resource requirement.
ALGORITHM
FCFS
1. Get the number of processes.
2. Get the ID and service time for each process.
3. Initially, the waiting time of the first process is zero and the total time for the first process is the service time
of that process.
4. Calculate the total time and waiting time for the remaining processes as follows:
a. Waiting time of a process is the total time of the previous process.
b. Total time of a process is calculated by adding its waiting time and service time.
5. Total waiting time is calculated by adding the waiting time of each process.
6. Total turn around time is calculated by adding the total time of each process.
7. Calculate average waiting time by dividing the total waiting time by the total number of processes.
8. Calculate average turn around time by dividing the total turn around time by the number of
processes.
9. Display the result.
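The steps above can be sketched in C as follows, assuming all processes arrive at time 0; variable names such as bt, wt and tat are illustrative.

#include <stdio.h>

int main(void)
{
    int n;
    printf("Enter number of processes: ");
    scanf("%d", &n);

    int bt[n], wt[n], tat[n];
    for (int i = 0; i < n; i++) {
        printf("Enter service time of process %d: ", i + 1);
        scanf("%d", &bt[i]);
    }

    wt[0] = 0;                        /* first process never waits        */
    tat[0] = bt[0];                   /* its total time is its burst time */
    for (int i = 1; i < n; i++) {
        wt[i]  = tat[i - 1];          /* waiting time = total time of previous process */
        tat[i] = wt[i] + bt[i];       /* total time   = waiting + service time         */
    }

    float twt = 0, ttat = 0;
    for (int i = 0; i < n; i++) {
        twt  += wt[i];
        ttat += tat[i];
        printf("P%d\tWT=%d\tTAT=%d\n", i + 1, wt[i], tat[i]);
    }
    printf("Average waiting time    = %.2f\n", twt / n);
    printf("Average turn around time = %.2f\n", ttat / n);
    return 0;
}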
SJF
1. Get the number of processes.
2. Get the ID and service time for each process.
3. Sort the processes in increasing order of service time. The waiting time of the first process in this
order is set to 0 and its total time is taken as its service time.
4. Calculate the total time and waiting time of the remaining processes as follows:
a. Waiting time of a process is the total time of the previous process.
b. Total time of a process is calculated by adding the waiting time and service time of
that process.
5. Total waiting time is calculated by adding the waiting time of each process.
6. Total turn around time is calculated by adding the total time of each process.
7. Calculate average waiting time by dividing the total waiting time by the total number of
processes.
8. Calculate average turn around time by dividing the total turn around time by the total number of
processes.
9. Display the result.
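A minimal C sketch of non-preemptive SJF under the same assumption (all arrivals at time 0) is given below; it simply sorts the processes by service time and then reuses the FCFS calculation. The names are illustrative.

#include <stdio.h>

int main(void)
{
    int n;
    printf("Enter number of processes: ");
    scanf("%d", &n);

    int pid[n], bt[n];
    for (int i = 0; i < n; i++) {
        pid[i] = i + 1;
        printf("Enter service time of process %d: ", i + 1);
        scanf("%d", &bt[i]);
    }

    /* selection sort on burst time: shortest job first */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (bt[j] < bt[i]) {
                int t = bt[i]; bt[i] = bt[j]; bt[j] = t;
                t = pid[i]; pid[i] = pid[j]; pid[j] = t;
            }

    int wt = 0, tat = 0;
    float twt = 0, ttat = 0;
    for (int i = 0; i < n; i++) {
        tat = wt + bt[i];                  /* total time of this process    */
        printf("P%d\tWT=%d\tTAT=%d\n", pid[i], wt, tat);
        twt += wt; ttat += tat;
        wt = tat;                          /* next process waits this long  */
    }
    printf("Average waiting time    = %.2f\n", twt / n);
    printf("Average turn around time = %.2f\n", ttat / n);
    return 0;
}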
Round Robin
1. Get the number of processes.
2. Get the process ID, burst time and arrival time for each of the processes.
3. Create an array rem_bt[] to keep track of the remaining burst time of the processes. This array is
initially a copy of bt[] (the burst times array).
4. Create another array wt[] to store the waiting times of the processes. Initialize this array to 0.
5. Initialize time: t = 0.
6. Find the waiting time for each process by repeatedly traversing all the processes until all of them are
done. Do the following for the i-th process if it is not done yet:
a. If rem_bt[i] > quantum
i) t = t + quantum
ii) rem_bt[i] = rem_bt[i] - quantum
b. Else (this is the last time slice for process i)
i) t = t + rem_bt[i]
ii) wt[i] = t - bt[i]
iii) rem_bt[i] = 0
7. Turn around time of each process is wt[i] + bt[i]; compute and display the average waiting and
turn around times as in FCFS.
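A minimal C sketch of this waiting-time calculation is shown below (all arrivals are assumed to be 0, and the variable names follow the steps above; the input format is illustrative).

#include <stdio.h>

int main(void)
{
    int n, quantum;
    printf("Enter number of processes and time quantum: ");
    scanf("%d %d", &n, &quantum);

    int bt[n], rem_bt[n], wt[n];
    for (int i = 0; i < n; i++) {
        printf("Enter burst time of process %d: ", i + 1);
        scanf("%d", &bt[i]);
        rem_bt[i] = bt[i];                 /* remaining burst time */
        wt[i] = 0;
    }

    int t = 0, done = 0;
    while (!done) {                        /* keep cycling until every process finishes */
        done = 1;
        for (int i = 0; i < n; i++) {
            if (rem_bt[i] > 0) {
                done = 0;
                if (rem_bt[i] > quantum) {
                    t += quantum;
                    rem_bt[i] -= quantum;
                } else {                   /* last slice for this process */
                    t += rem_bt[i];
                    wt[i] = t - bt[i];     /* waiting time = finish time - burst time */
                    rem_bt[i] = 0;
                }
            }
        }
    }

    float twt = 0, ttat = 0;
    for (int i = 0; i < n; i++) {
        int tat = wt[i] + bt[i];           /* turn around = waiting + burst */
        twt += wt[i]; ttat += tat;
        printf("P%d\tWT=%d\tTAT=%d\n", i + 1, wt[i], tat);
    }
    printf("Average waiting time    = %.2f\n", twt / n);
    printf("Average turn around time = %.2f\n", ttat / n);
    return 0;
}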
Priority Scheduling
OUTPUT
FCFS
SJF
Priority Scheduling
RESULT
The CPU scheduling algorithms are simulated and output is verified.
EXPERIMENT 4
AIM
To simulate the Banker's algorithm for deadlock avoidance.
DESCRIPTION
The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by
Edsger Dijkstra that tests for safety by simulating the allocation of the predetermined maximum possible
amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions
for all other pending activities, before deciding whether allocation should be allowed to continue.
The following data structures are used to implement the Banker's algorithm.
Let 'n' be the number of processes in the system and 'm' be the number of resource types.
Available:
It is a 1-d array of size 'm' indicating the number of available resources of each type.
Available[ j ] = k means there are 'k' instances of resource type Rj.
Max:
It is a 2-d array of size 'n x m' that defines the maximum demand of each process in the system.
Max[ i, j ] = k means process Pi may request at most 'k' instances of resource type Rj.
Allocation:
It is a 2-d array of size 'n x m' that defines the number of resources of each type currently
allocated to each process.
Allocation[ i, j ] = k means process Pi is currently allocated 'k' instances of resource type Rj.
Need:
It is a 2-d array of size 'n x m' that indicates the remaining resource need of each process.
Need[ i, j ] = k means process Pi currently needs 'k' instances of resource type Rj for its
execution.
Need[ i, j ] = Max[ i, j ] – Allocation[ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies the additional
resources that process Pi may still request to complete its task.
The Banker's algorithm consists of a safety algorithm and a resource-request algorithm.
ALGORITHM
Safety Algorithm
The algorithm is for finding out whether or not a system is in a safe state and can be described as
follows:
1) Let Work and Finish be vectors of length 'm' and 'n' respectively.
Initialize: Work = Available
Finish[i] = false; for i = 1, 2, 3, 4, ..., n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
If no such i exists, go to step (4).
3) Work = Work + Allocationi
Finish[i] = true
Go to step (2).
4) If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti[ j ] = k means process Pi wants k instances of
resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
1) If Requesti <= Needi
Go to step (2); otherwise, raise an error condition, since the process has exceeded its maximum claim.
2) If Requesti <= Available
Go to step (3); otherwise, Pi must wait, since the resources are not available.
3) Assume that the system has allocated the requested resources to process Pi by modifying the state as
follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti, and the old
resource-allocation state is restored.
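A minimal C sketch of the safety algorithm is shown below. The 5-process, 3-resource data set is an illustrative example only and is not the data used in the sample output that follows.

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* processes      */
#define M 3   /* resource types */

int main(void)
{
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[M]    = {3,3,2};

    int need[N][M], work[M], safe_seq[N], count = 0;
    bool finish[N] = {false};

    for (int i = 0; i < N; i++)                 /* Need = Max - Allocation */
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];
    for (int j = 0; j < M; j++)
        work[j] = avail[j];                     /* Work = Available */

    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int j;
            for (j = 0; j < M; j++)             /* check Needi <= Work */
                if (need[i][j] > work[j]) break;
            if (j == M) {                       /* Pi can run to completion */
                for (j = 0; j < M; j++)
                    work[j] += alloc[i][j];     /* it then releases its resources */
                finish[i] = true;
                safe_seq[count++] = i;
                found = true;
            }
        }
        if (!found) {                           /* no runnable process left */
            printf("The system is not in a safe state.\n");
            return 0;
        }
    }

    printf("The system is in a safe state.\nSafe sequence: ");
    for (int i = 0; i < N; i++)
        printf("P%d ", safe_seq[i] + 1);
    printf("\n");
    return 0;
}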
OUTPUT
5 1 0 5
1 5 3 0
3 0 3 3
Allocated resources: 7 3 7 5
Available resources: 1 2 2 2
Process 3 is executing.
The process is in safe state.
Available vector: 5 2 2 5
Process 1 is executing.
The process is in safe state.
Available vector: 7 2 3 6
Process 2 is executing.
The process is in safe state.
Available vector: 7 3 5 7
Process 4 is executing.
The process is in safe state.
Available vector: 7 5 6 7
Process 5 is executing.
The process is in safe state.
Available vector: 8 5 9 7
RESULT
The Banker's algorithm for deadlock avoidance is simulated and the output is verified.
EXPERIMENT 5
AIM
To simulate single level, two level and hierarchical directory structures.
DESCRIPTION
The directory structure is the organization of files into a hierarchy of folders. In a single-level
directory system, all the files are placed in one directory. There is a root directory which has all files.
It has a simple architecture and there are no sub directories. Advantage of single level directory
system is that it is easy to find a file in the directory.
In the two-level directory system, each user has their own user file directory (UFD). The system maintains
a master block that has one entry for each user. This master block contains the addresses of the
directory of the users. When a user job starts or a user logs in, the system's master file directory
(MFD) is searched. When a user refers to a particular file, only his own UFD is searched. This
effectively solves the name collision problem and isolates users from one another.
Hierarchical directory structure allows users to create their own subdirectories and to organize their
files accordingly. A tree is the most common directory structure. The tree has a root directory, and
every file in the system has a unique path name. A directory (or subdirectory) contains a set of files or
subdirectories.
ALGORITHM
a) Single level directory
b) Two level directory
1. Create a structure to store details of multiple directories and multiple files for each of the
directories.
2. Perform the following operations:
a. Create directory
i. Accept the directory name.
ii. Increment the directory count.
iii. Set the file count as 0.
iv. Update the directory information table.
v. Display 'Directory created!'.
b. Create file
i. Accept the directory name.
ii. Compare the directory name with the names of existing directories.
iii. If a match is found, then
1. Accept the file name.
2. Increment the file count for this directory.
3. Update the corresponding file information table.
iv. Otherwise, print 'Directory not found!'.
c. Delete file
i. Accept the directory name.
ii. Compare the directory name with the names of existing directories.
iii. If a match is found, then
1. Accept the name of the file to be deleted.
2. Compare the file name with the names of existing files in this directory.
3. If a match is found, then
a. Delete the file by updating the corresponding file information table.
b. Decrement the file count for this directory.
4. Otherwise, display 'File not found!'.
iv. Otherwise, display 'Directory not found!'.
d. Search file
i. Accept the directory name.
ii. Compare the directory name with the names of existing directories.
iii. If a match is found, then
1. Accept the name of the file to be searched.
2. Compare the file name with the names of existing files in this directory.
3. If a match is found, then
a. Display 'File found!'.
4. Otherwise, display 'File not found!'.
iv. Otherwise, display 'Directory not found!'.
e. Display files
i. Accept the directory name.
ii. Compare the directory name with the names of existing directories.
iii. If a match is found, then
1. Check if the directory is empty.
2. If yes, then
a. Display 'Directory empty!'.
3. Otherwise, display the information of the files in that directory.
iv. Otherwise, display 'Directory not found!'.
3. EXIT
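A minimal C sketch of the directory and file information tables used by the two-level algorithm above is given below; the sizes, field names and the demonstration calls in main() are illustrative assumptions.

#include <stdio.h>
#include <string.h>

#define MAX_DIR   10
#define MAX_FILES 10

struct directory {
    char dname[20];                 /* directory (user) name    */
    int  fcount;                    /* number of files it holds */
    char fname[MAX_FILES][20];      /* file information table   */
} dir[MAX_DIR];

int dcount = 0;                     /* directory count */

void create_dir(const char *name)
{
    strcpy(dir[dcount].dname, name);
    dir[dcount].fcount = 0;
    dcount++;
    printf("Directory created!\n");
}

void create_file(const char *dname, const char *fname)
{
    for (int i = 0; i < dcount; i++)
        if (strcmp(dir[i].dname, dname) == 0) {
            strcpy(dir[i].fname[dir[i].fcount++], fname);
            printf("File created!\n");
            return;
        }
    printf("Directory not found!\n");
}

int main(void)
{
    create_dir("USER1");
    create_file("USER1", "A1");
    create_file("USER2", "B1");     /* demonstrates the not-found path */
    return 0;
}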
c) Hierarchical
1. Create a root directory.
2. Create subdirectories and files under the root directory as required.
3. Create subdirectories and files under each subdirectory as required, resulting in a tree
structure.
4. Display the tree structure.
OUTPUT
Single level:
Two Level
Directory created
File A2 is deleted
Hierarchical
RESULT
Single level, two-level and hierarchical directory structures were simulated and the programs
executed successfully.
EXPERIMENT 6
AIM
To simulate the following disk scheduling algorithms: FCFS, SCAN and C-SCAN.
DESCRIPTION
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk
scheduling is also known as I/O scheduling.
a) FCFS
FCFS is the simplest of all the disk scheduling algorithms. In FCFS, the requests are addressed in the
order they arrive in the disk queue.
Advantages:
Every request gets a fair chance.
No indefinite postponement of any request.
Disadvantages:
Does not try to optimize the seek time.
May not provide the best possible service.
b) SCAN
In the SCAN algorithm, the disk arm moves in a particular direction and services the requests coming in
its path, and after reaching the end of the disk, it reverses its direction and again services the requests
arriving in its path. So, this algorithm works like an elevator and is hence also known as the elevator
algorithm. As a result, the requests at the midrange are serviced more and those arriving behind the
disk arm will have to wait.
Advantages:
High throughput
Low variance of response time
Average response time
Disadvantages:
Long waiting time for requests for locations just visited by the disk arm
c) C-SCAN
In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its
direction. So, it may be possible that too many requests are waiting at the other end or there may be
zero or few requests pending at the scanned area. These situations are avoided in C-SCAN algorithm
in which the disk arm instead of reversing its direction goes to the other end of the disk and starts
servicing the requests from there. So, the disk arm moves in a circular fashion; since this algorithm is
otherwise similar to SCAN, it is known as C-SCAN (Circular SCAN).
Advantages:
Provides more uniform wait time compared to SCAN.
ALGORITHM
FCFS:
1. Let the request array represent an array storing the indexes of the tracks that have been requested, in
ascending order of their time of arrival. 'head' is the position of the disk head.
2. One by one, take the tracks in the default order and calculate the absolute distance of the track from
the head.
3. Add this distance to the total head movement.
4. The currently serviced track position now becomes the new head position.
5. Go to step 2 until all tracks in the request array have been serviced.
SCAN:
1. Let the request array represent an array storing the indexes of the tracks that have been requested, in
ascending order of their time of arrival. 'head' is the position of the disk head.
2. Let direction represent whether the head is moving towards the left or the right.
3. In the direction in which the head is moving, service all tracks one by one.
4. Calculate the absolute distance of the track from the head.
5. Add this distance to the total head movement.
6. The currently serviced track position now becomes the new head position.
7. Go to step 3 until one of the ends of the disk is reached.
8. If the end of the disk is reached, reverse the direction and go to step 3 until all tracks in the request array
have been serviced.
C-SCAN:
1. Let the request array represent an array storing the indexes of the tracks that have been requested, in
ascending order of their time of arrival. 'head' is the position of the disk head.
2. Let direction represent whether the head is moving towards the left or the right.
3. In the direction in which the head is moving, service all tracks one by one.
4. Calculate the absolute distance of the track from the head.
5. Add this distance to the total head movement.
6. The currently serviced track position now becomes the new head position.
7. Go to step 3 until one of the ends of the disk is reached.
8. If the end of the disk is reached, set the head position at the other end and go to step 3 until all tracks
in the request array have been serviced.
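A minimal C sketch of the FCFS seek-time calculation is shown below; the request queue and the initial head position are illustrative values, not the ones behind the sample output that follows.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int request[] = {98, 183, 37, 122, 14, 124, 65, 67};   /* illustrative request queue */
    int n = sizeof(request) / sizeof(request[0]);
    int head = 53;                         /* illustrative initial head position */
    int total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(request[i] - head);   /* distance moved for this request */
        head = request[i];                 /* head now rests on this track    */
    }
    printf("Total head movement (FCFS): %d\n", total);
    return 0;
}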
OUTPUT
MENU
1. FCFS
2. SCAN
3. C-SCAN
4. EXIT
MENU
1. FCFS
2. SCAN
3. C-SCAN
4. EXIT
C-SCAN
*****
Total seek time : 386
MENU
1. FCFS
2. SCAN
3. C-SCAN
4. EXIT
C-SCAN
*****
Total seek time : 382
MENU
1. FCFS
2. SCAN
3. C-SCAN
4. EXIT
SCAN
*****
Total seek time : 230
MENU
1. FCFS
2. SCAN
3. C-SCAN
4. EXIT
RESULT
Thus the C program to implement the disk scheduling algorithms, namely FCFS, SCAN and C-SCAN,
was written and executed successfully. The obtained outputs were verified.
PART – B
EXPERIMENT 7
AIM
To implement pass one of a two pass assembler.
DESCRIPTION
An assembler is a program for converting instructions written in low-level assembly code into
relocatable machine code and generating, along with it, information for the loader. It generates instructions
by evaluating the mnemonics (symbols) in the operation field and finding the values of symbols and literals
to produce machine code. If the assembler does all this work in one scan, it is called a single pass
assembler; otherwise, if it does it in multiple scans, it is called a multiple pass assembler. Here, the
assembler divides these tasks between two passes:
Pass-1:
a) Define symbols and literals and remember them in the symbol table and literal table respectively.
b) Keep track of the location counter.
c) Process pseudo-operations.
ALGORITHM
Begin
read first input line;
if OPCODE = 'START' then
begin
save #[OPERAND] as starting address
initialize LOCCTR to starting address
write line to intermediate file
read next input line
end {if START}
else
initialize LOCCTR to 0
while OPCODE != 'END'
do begin
if this is not a comment line then
begin
if there is a symbol in the LABEL field then begin
search SYMTAB for LABEL
if found then
set error flag (duplicate symbol)
else
insert (LABEL, LOCCTR) into SYMTAB
end {if symbol}
search OPTAB for OPCODE
if found then
add 3 {instruction length} to LOCCTR
else if OPCODE = 'WORD' then
add 3 to LOCCTR
else if OPCODE = 'RESW' then
add 3 * #[OPERAND] to LOCCTR
else if OPCODE = 'RESB' then
add #[OPERAND] to LOCCTR
else if OPCODE = 'BYTE' then
begin
find length of constant in bytes
add length to LOCCTR
end {if BYTE}
else
set error flag (invalid operation code)
end {if not a comment}
write line to intermediate file
read next input line
end {while not END}
write last line to intermediate file
save (LOCCTR - starting address) as program length
end {Pass 1}
OUTPUT
INPUT FILES:
input.txt
COPY START 1000
LDA ALPHA
ADD ONE
SUB TWO
STA BETA
ALPHA BYTE C'KLNCE'
ONE RESB 2
TWO WORD 5
BETA RESW 1
END
OPTAB.txt
LDA 00
STA 0C
ADD 18
SUB 1C
OUTPUT FILES:
The length of the program is 19
SYMTAB.txt
ALPHA 100C
ONE 1011
TWO 1013
BETA 1016
output.txt
COPY START 1000
1000 LDA ALPHA
1003 ADD ONE
1006 SUB TWO
1009 STA BETA
100C ALPHA BYTE C'KLNCE'
1011 ONE RESB 2
1013 TWO WORD 5
1016 BETA RESW 1
1019 END
RESULT
The pass 1 of a two pass assembler was simulated successfully and the output was verified.
EXPERIMENT 8
AIM
To implement pass two of a two pass assembler.
DESCRIPTION
Pass-2 of assembler generates machine code by converting symbolic machine-opcodes into their
respective bit configuration (machine understandable form). It stores all machine-opcodes in MOT
table (op-code table) with symbolic code, their length and their bit configuration. It will also process
pseudo-ops and will store them in the POT table (pseudo-op table).
Pass-2:
a) Generate object code by converting symbolic op-code into respective numeric op-
code.
b) Generate data for literals and look up the values of symbols.
ALGORITHM
Begin
read first input line {from intermediate file}
if OPCODE = 'START' then
begin
write listing line
read next input line
end {if START}
write header record to object program
initialize first text record
while OPCODE != 'END' do
begin
if this is not a comment line then
begin
search OPTAB for OPCODE
if found then
begin
if there is a symbol in OPERAND field then
begin
search SYMTAB for OPERAND
if found then
store symbol value as operand address
else
begin
store 0 as operand address
set error flag (undefined symbol)
end
end {if symbol}
else
store 0 as operand address
assemble the object code instruction
end {if opcode found}
else if OPCODE = 'BYTE' or 'WORD' then
convert constant to object code
if object code will not fit into the current Text record then
begin
write Text record to object program
initialize new Text record
end
add object code to Text record
end {if not comment}
write listing line
read next input line
end {while not END}
write last Text record to object program
write End record to object program
write last listing line
end
OUTPUT
INPUT FILES:
OPTAB.txt
LDA 00
STA 0C
ADD 18
SUB 1C
SYMTAB.txt
ALPHA 100C
ONE 1011
TWO 1013
BETA 1016
input.txt
COPY START 1000
1000 LDA ALPHA
1003 ADD ONE
1006 SUB TWO
1009 STA BETA
100C ALPHA BYTE C'KLNCE'
1011 ONE RESB 2
1013 TWO WORD 5
1016 BETA RESW 1
1019 END
OUTPUT FILES:
RESULT
Pass 2 of a two pass assembler was simulated successfully and the output was verified.
EXPERIMENT 9
AIM
To implement a single pass assembler.
DESCRIPTION
A single pass assembler scans the program only once and creates the equivalent binary program. The
assembler substitutes all of the symbolic instructions with machine code in one pass. One-pass
assemblers are used when it is necessary or desirable to avoid a second pass over the source program, or
when the external storage for the intermediate file between the two passes is slow or inconvenient to use.
The main problem is forward references to both data and instructions.
One simple way to eliminate this problem is to require that all areas be defined before they are referenced.
It is possible, although inconvenient, to do so for data items. Forward jumps to instruction items cannot
be easily eliminated.
ALGORITHM
OUTPUT
INPUT FILES:
input1.txt
COPY START 1000
- LDA ALPHA
- STA BETA
ALPHA RESW 1
BETA RESW 1
- END -
optab1.txt
LDA 00
STA 0C
OUTPUT FILES:
SYMTAB.txt
ALPHA 1006
BETA 1009
RESULT
A single pass assembler was implemented successfully and the output was verified.
EXPERIMENT 10
IMPLEMENTATION OF A MACRO PROCESSOR
AIM
To implement a macro processor.
DESCRIPTION
In assembly language programming it often happens that some set or block of statements gets repeated
every now and then. In this context the programmer uses the concept of macro instructions (often called
macros), where a single-line abbreviation is used for a set of lines. For every occurrence of that single line the
whole block of statements gets expanded in the main source code. This gives a high level feature to
assembly language that makes it more convenient for the user to write code easily.
A macro instruction (macro) is simply a notational convenience for the programmer. It allows the
programmer to write a shorthand version of a program (modular programming). A macro represents a
commonly used group of statements in the source program. The macro processor replaces each macro
instruction with the corresponding group of source statements. This operation is called "expanding
the macro". Using macros allows a programmer to write a shorthand version of a program. For
example, before calling a subroutine, the contents of all registers may need to be stored. This routine
work can be done using a macro.
The macro processor uses the following data structures:
(1) DEFTAB - Stores the macro definition, including the macro prototype and the statements that
make up the macro body.
(2) NAMTAB - Stores macro names; it serves as an index to DEFTAB, containing
pointers to the beginning and end of each definition.
(3) ARGTAB - Used during the expansion of macro invocations. When a macro
invocation statement is encountered, the arguments are stored in this table according
to their position in the argument list.
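A minimal C sketch of these three tables is given below; the sizes, field names and the sample entry stored in main() (which mirrors the EX1 macro used later, with 0-based indexes) are illustrative assumptions, and the expansion logic itself is omitted.

#include <stdio.h>
#include <string.h>

#define MAX_LINES  100
#define MAX_MACROS 10
#define MAX_ARGS   10

char deftab[MAX_LINES][80];        /* DEFTAB: macro prototype and body lines        */
int  defcount = 0;

struct {                           /* NAMTAB: macro name -> range of DEFTAB lines   */
    char name[20];
    int  start, end;
} namtab[MAX_MACROS];
int  namcount = 0;

char argtab[MAX_ARGS][20];         /* ARGTAB: arguments of the current invocation;  */
                                   /* argtab[k-1] is substituted for parameter ?k   */

int main(void)
{
    /* illustrative entry: a macro EX1 whose definition occupies DEFTAB lines 0..3 */
    strcpy(deftab[defcount++], "EX1 &A, &B");
    strcpy(deftab[defcount++], "LDA ?1");
    strcpy(deftab[defcount++], "STA ?2");
    strcpy(deftab[defcount++], "MEND");
    strcpy(namtab[namcount].name, "EX1");
    namtab[namcount].start = 0;
    namtab[namcount].end   = 3;
    namcount++;

    printf("%s %d %d\n", namtab[0].name, namtab[0].start, namtab[0].end);
    return 0;
}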
ALGORITHM
OUTPUT
INPUT FILES:
Input.txt
SAMPLE START 1000
EX1 MACRO &A, &B
- LDA &A
- STA &B
- MEND -
- EX1 N1, N2
N1 RESW 1
N2 RESW 1
- END -
OUTPUT FILES:
DEFTAB.txt
1 //EX1 &A, &B
2 LDA ?1
3 STA ?2
4 MEND
NAMTAB.txt
EX1 1 4
Output.txt
SAMPLE START 1000
- LDA N1
- STA N2
N1 RESW 1
N2 RESW 1
- END -
RESULT
A macro processor was implemented successfully and the output was verified.
EXPERIMENT 11
IMPLEMENTATION OF A SYMBOL TABLE
AIM
To implement the symbol table functions: create, insert, modify, search, and display.
DESCRIPTION
A symbol table is a data structure used by the compiler, where each identifier in a program's source
code is stored along with information associated with it relating to its declaration. It stores the identifier
as well as its associated attributes like scope, type, line number of occurrence, etc.
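A minimal C sketch of such a table, as an array of (label, address) pairs with insert and search operations, is given below; the sizes and the sample entries are illustrative.

#include <stdio.h>
#include <string.h>

#define MAX 50

struct symbol {
    char label[20];
    int  address;
} symtab[MAX];

int count = 0;

void insert(const char *label, int address)
{
    strcpy(symtab[count].label, label);
    symtab[count].address = address;
    count++;
}

int search(const char *label)               /* returns the index, or -1 if not present */
{
    for (int i = 0; i < count; i++)
        if (strcmp(symtab[i].label, label) == 0)
            return i;
    return -1;
}

int main(void)
{
    insert("ALPHA", 0x100C);                /* sample entries */
    insert("BETA", 0x1016);

    int i = search("BETA");
    if (i >= 0)
        printf("Found %s at %X\n", symtab[i].label, symtab[i].address);

    for (int j = 0; j < count; j++)         /* display the whole table */
        printf("%-10s %X\n", symtab[j].label, symtab[j].address);
    return 0;
}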
ALGORITHM
OUTPUT
1. CREATE
2. INSERT
3. MODIFY
4. SEARCH
5. DISPLAY
6. EXIT:1
The current value of the variable c is 45. Enter the new variable and its value 44 The table after
modification is:
Variable value
a 23
c 44
b 34
RESULT
The symbol table was implemented successfully and the output was verified.
AIM
To implement a symbol table with suitable hashing.
DESCRIPTION
ALGORITHM
1. Create m number of nodes in a symbol table, where m is the size of the table.
2. Perform operations, insert, search and display.
3. Add entries to the symbol table by converting the string to be inserted as integers and applying
the hash function - key mod m.
4. Entries may be added to the table while it is created itself.
5. Append new contents to the symbol table using the hash function, with the constraint that there is no
duplication of entries, using the "insert" option.
6. Modify existing content of the table using modify option.
7. Use display option to display the contents of the table.
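A minimal C sketch of steps 3-5 using the hash function key mod m with linear probing is given below; the table size (m = 11) and the sample entries are illustrative assumptions.

#include <stdio.h>
#include <string.h>

#define M 11                                 /* illustrative table size */

struct entry {
    int  key;                                /* 0 marks an empty slot   */
    char name[20];
} table[M];

int hash(int key) { return key % M; }        /* hash function: key mod m */

void insert(int key, const char *name)
{
    int i = hash(key);
    for (int probes = 0; probes < M; probes++) {
        if (table[i].key == key) {           /* no duplication of entries */
            printf("Duplicate entry!\n");
            return;
        }
        if (table[i].key == 0) {             /* free slot found */
            table[i].key = key;
            strcpy(table[i].name, name);
            return;
        }
        i = (i + 1) % M;                     /* linear probing */
    }
    printf("Table full!\n");
}

int main(void)
{
    insert(1000, "system");
    insert(2000, "software");
    for (int i = 0; i < M; i++)              /* display the whole table */
        printf("%2d %5d %s\n", i, table[i].key, table[i].name);
    return 0;
}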
OUTPUT
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 2000 software
10 1000 system
RESULT
The symbol table with suitable hashing was implemented in C language, and the output has been
verified.
ADDITIONAL EXPERIMENTS
DESCRIPTION
In an operating system that uses paging for memory management, a page replacement algorithm is
needed to decide which page needs to be replaced when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory page that is mapped
into the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In case of
page fault, Operating System might have to replace one of the existing pages with the newly needed
page. Different page replacement algorithms suggest different ways to decide which page to replace.
The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms
ALGORITHM
1. FIFO
1. Start traversing the pages. If the current page is not in memory and memory is full:
a) Remove the first page from the queue, as it was the first to be entered into
memory.
b) Replace the first page in the queue with the current page in the string.
c) Store the current page in the queue.
d) Increment page faults.
2. Return the number of page faults.
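A minimal C sketch of FIFO page replacement using a circular queue of frames is shown below; the input format is illustrative.

#include <stdio.h>

int main(void)
{
    int n, frames, faults = 0, next = 0;      /* next = queue position to replace */
    printf("Enter number of references and frames: ");
    scanf("%d %d", &n, &frames);

    int ref[n], frame[frames];
    for (int i = 0; i < frames; i++) frame[i] = -1;   /* -1 marks an empty frame */
    for (int i = 0; i < n; i++) scanf("%d", &ref[i]);

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < frames; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (hit) {
            printf("For %d : No page fault\n", ref[i]);
        } else {
            frame[next] = ref[i];             /* replace the oldest page */
            next = (next + 1) % frames;
            faults++;
            printf("For %d :", ref[i]);
            for (int j = 0; j < frames; j++)
                if (frame[j] != -1) printf(" %d", frame[j]);
            printf("\n");
        }
    }
    printf("Total no of page faults: %d\n", faults);
    return 0;
}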
2. LRU
Let capacity be the number of pages that memory can hold. Let set be the current set of pages in
memory.
1. Start traversing the pages.
i) If set holds fewer pages than capacity:
a) Insert pages into the set one by one until the size of set reaches capacity or all page requests
are processed.
b) Simultaneously maintain the most recent occurrence index of each page in a map called indexes.
c) Increment page faults.
ii) Else
If the current page is present in set, do nothing.
Else
a) Find the page in the set that was least recently used using the indexes map. Replace
the page with the minimum index with the current page.
b) Increment page faults.
c) Update the index of the current page.
2. Return the number of page faults.
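A minimal C sketch of LRU page replacement is shown below; last_used[] plays the role of the indexes map in the steps above, and the input format is illustrative.

#include <stdio.h>

int main(void)
{
    int n, frames, faults = 0;
    printf("Enter number of references and frames: ");
    scanf("%d %d", &n, &frames);

    int ref[n], frame[frames], last_used[frames];
    for (int i = 0; i < frames; i++) { frame[i] = -1; last_used[i] = -1; }
    for (int i = 0; i < n; i++) scanf("%d", &ref[i]);

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < frames; j++)
            if (frame[j] == ref[i]) { hit = j; break; }
        if (hit >= 0) {
            last_used[hit] = i;               /* refresh recency on a hit */
            printf("For %d : No page fault\n", ref[i]);
            continue;
        }
        int victim = 0;                       /* choose an empty or least recently used frame */
        for (int j = 1; j < frames; j++)
            if (last_used[j] < last_used[victim]) victim = j;
        frame[victim] = ref[i];
        last_used[victim] = i;
        faults++;
        printf("For %d :", ref[i]);
        for (int j = 0; j < frames; j++)
            if (frame[j] != -1) printf(" %d", frame[j]);
        printf("\n");
    }
    printf("Total no of page faults: %d\n", faults);
    return 0;
}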
OUTPUT
Enter data
Enter no of frames: 3
For 7 : 7
For 2 : 7 2
For 3 : 7 2 3
For 1 : 2 3 1
For 2 : No page fault
For 5 : 3 1 5
For 3 : No page fault
For 4 : 1 5 4
For 6 : 5 4 6
For 7 : 4 6 7
For 7 : No page fault
For 1 : 6 7 1
For 0 : 7 1 0
For 5 : 1 0 5
For 4 : 0 5 4
For 6 : 5 4 6
For 2 : 4 6 2
For 3 : 6 2 3
For 0 : 2 3 0
For 1 : 3 0 1
Total no of page faults: 17
Enter data
Enter no of frames: 3
For 1 : 1
For 2 : 1 2
For 3 : 1 2 3
For 2 : No page fault!
For 4 : 4 2 3
For 1 : 4 2 1
For 3 : 4 3 1
For 2 : 2 3 1
For 4 : 2 3 4
For 1 : 2 1 4
Total no of page faults: 9
RESULT
The program to implement FIFO and LRU page replacement algorithms was executed successfully
and the output was verified.
5. What is mutex?
Mutex is a locking mechanism which allows only one process to access a resource at a time. It
stands for mutual exclusion and ensures that only one process can enter the critical section at a time.
8. What is deadlock?
A deadlock is a situation where two or more processes or threads sharing the same resources are
effectively preventing each other from accessing those resources. Thus, none of the processes can continue
executing, leading to a deadlock. A deadlock can arise only if the following four conditions hold
simultaneously:
Mutual Exclusion: At least one resource is held in a non-sharable mode, that is, only one process at a
time can use the resource. If another process requests that resource, the requesting process has to wait
till it is released.
Hold and Wait: There must exist a process that is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.
No Preemption: Resources cannot be preempted; that is, a resource can only be released after the
process has completed its task.
Circular Wait: There must exist a set {p0, p1, ..., pn} of waiting processes such that p0 is waiting for a
resource which is held by p1, p1 is waiting for a resource which is held by p2, ..., pn-1 is waiting for a
resource which is held by pn, and pn is waiting for a resource which is held by p0.
PART - B
1. Define System Software.
System software is a type of computer program that is designed to run a computer's hardware and
application programs.