OS Labs
Student Name
Student ID
List of Experiments/Content
6 Understanding Process.
Process creation in Linux using System Calls.
• fork() method.
• Zombie and Orphan processes.
7 Understanding Threads.
• Threads vs. Processes.
• Multithreaded Programming using Python and C.
8 Simulation of FCFS CPU scheduling algorithm.
Simulation of SJF CPU scheduling algorithm.
Objective:
Executing some of the most frequently used Linux commands
To understand and use basic utilities/Commands of UNIX/LINUX.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective:
To understand and use basic utilities/Commands of UNIX/LINUX.
THEORY
A Linux/UNIX command is any executable file. This means that any executable file added to the system becomes a
new command on the system. A Linux command is a type of file that is designed to be run, as opposed to files
containing data or configuration information.
Types of Commands
3. cd
Description: Changes the current directory to any accessible directory on the system.
Syntax: For instance, to change from /home/user1 to a subdirectory of user1 named wordfiles, use the following:
$ cd wordfiles
4. ls
Description: Displays the listing of files and directories. If no file or directory is specified, then the current directory’s
contents are displayed. By default, the contents are sorted alphabetically.
Syntax: To view the contents of user1 home directory use this:
$ ls
To list the contents of any directory on the system use:
$ ls /usr
Options:
-l Lists files and directories with their mode, number of links, owner, file size, modification date and time, and filename.
-t Lists in order of last modification time.
-a Lists all entries, including hidden files.
-d Lists directory entries instead of their contents.
-p Puts a slash at the end of each directory name.
-u Lists in order of last access time.
-i Displays inode information.
5. grep
Description: Searches files for lines matching a specific pattern and displays the lines
grep stands for Global Regular Expression Print. What grep does, essentially, is find and display lines that contain
a pattern that you specify. There are two basic ways to use grep.
The first use of grep is to filter the output of other commands. The general syntax is <command> | grep
<pattern>. For instance, if we wanted to see every actively running process on the system, we would type ps -a | grep
R. In this application, grep passes on only those lines that contain the pattern (in this case, the single letter) R. Note
that if someone were running a program called Resting, it would show up even if its status were S for sleeping,
because grep would match the R in Resting. An easy way around this problem is to type grep " R ", which explicitly
tells grep to search for an R with a space on each side. You must use quotes whenever you search for a pattern that
contains one or more blank spaces. The second use of grep is to search for lines that contain a specified pattern in a
specified file. The syntax here is grep <pattern> <filename>. Be careful. It's easy to specify the filename first and the
pattern second by mistake! Again, you should be as specific as you can with the pattern to be matched, in order to
avoid "false" matches.
Syntax: Assuming that you are in your home directory, the following command searches for the word "hello" in
each file in your home directory and displays the matching lines: $ grep hello *
6. clear
Description: Clears the terminal screen.
Syntax: $ clear
7. date
Description: Displays the current date and time.
Syntax: $ date
Options (format sequences, used as date +%a, date +%A, and so on):
%a = Abbreviated weekday name.
%A = Full weekday name.
%b = Abbreviated month name.
%B = Full month name.
%c = Current date and time.
%C = Century as a decimal number.
%Y = Full year.
%Z = Time zone.
8. echo
Description: Prints the given input string to standard output.
Syntax: echo [options...] [string]
1. Write Linux command to List all files (and subdirectories) in the home directory.
2. Write Linux command to Display the content of /etc/passwd file with as many lines at a time as the last digit of
your roll number.
4. Use grep to search for the pattern: ‘The’ in a text file in the home directory
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Managing Files & Directories in Linux
To understand Linux directory structure, and file manipulation.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: To understand Linux directory structure, and file manipulation.
THEORY
Directories
Linux, like many other computer systems, organizes files in directories. You can think of directories as file folders and
their contents as the files. However, there is one absolutely crucial difference between the Linux file system and an
office filing system. In the office, file folders usually don't contain other file folders. In Linux, file folders can contain other
file folders. In fact, there is no Linux "filing cabinet"— just a huge file folder that holds some files and other folders.
These folders contain files and possibly other folders in turn, and so on.
Naming Directories
Directories are named just like files, and they can contain upper- and lowercase letters, numbers, and characters such
as -, ., and _. The slash (/) character is used to show files or directories within other directories.
/ This is the root directory. It holds the actual Linux program, as well as subdirectories. Do not clutter this directory with
your files!
/home
This directory holds users' home directories. In other UNIX systems, this can be the /usr or /u directory.
/bin
This directory holds many of the basic Linux programs. bin stands for binaries: files that are executable and whose
contents only the computer can understand.
/usr
This directory holds many other user-oriented directories. Some of the most important are described in the following
sections. Other directories found in /usr include
sbin
This directory holds system files that are usually run automatically by the Linux system.
etc
This directory and its subdirectories hold many of the Linux configuration files. These files are usually text, and they can
be edited to change the system's configuration (if you know what you're doing!).
Creating Files
Linux has many ways to create and delete files. In fact, some of the ways are so easy to perform that you have to be
careful not to accidentally overwrite or erase files!
Return to your home directory by typing cd. Make sure you're in your /home/<user> directory by running pwd. A file can
be created by typing ls -l /bin > test. Remember, the > symbol means "redirect all output to the following filename." Note
that the file test didn't exist before you typed this command. When you redirect to a file, Linux automatically creates the
file if it doesn't already exist.
What if you want to type text into a file, rather than some command's output? The quick way is to use the command cat.
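A quick sketch of that use of cat (the file name thisfile is just an example; press Ctrl-D on an empty line to finish typing):
$ cat > thisfile
This is a line of text typed at the keyboard.
$ cat thisfile
This is a line of text typed at the keyboard.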
If you want to copy this file to /tmp but give the new file a different name, enter
$ cp thisfile /tmp/newfilename
Also, to avoid overwriting a file accidentally, use the -i flag of the cp command, which forces the system to ask for
confirmation before overwriting any existing file. The following commands copy several files into /tmp:
$ cp fileone /tmp
$ cp filetwo /tmp
$ cp filethree /tmp
Creating a Directory
To create a new directory, use the mkdir command. The syntax is mkdir <name>, where <name> is replaced by
whatever you want the directory to be called. This creates a subdirectory with the specified name in your current
directory:
$ ls
anotherfile newfile thirdfile
$ mkdir newdir
Moving Directories
To move a directory, use the mv command. The syntax is mv <directory> <destination>. In the following
example, you would move the newdir subdirectory found in your current directory to the /tmp directory:
$ mv newdir /tmp
$ cd /tmp
$ ls
newdir
Removing Files
To remove (or delete) a file, use the rm command found at /bin/rm. (rm is a very terse spelling of remove). The syntax is
rm <filename>. For instance:
$ rm myfile removes the file myfile from your current directory.
$ rm * removes all files from your current directory. (Be careful when using wildcards!)
$ rm /tmp/*files removes all files ending in “files” from the /tmp directory.
Note: As soon as a file is removed, it is gone! Always think about what you're doing before you remove a file.
Removing Directories
The command normally used to remove (delete) directories is rmdir. The syntax is rmdir <directory>.
Before you can remove a directory, it must be empty (the directory can't hold any files or subdirectories).
Otherwise, you see
rmdir: <directory>: Directory not empty
When you create a file, you are that file's owner. Being the file's owner gives you the privilege of changing the file's
permissions or ownership. Of course, once you change the ownership to another user, you can't change the ownership
or permissions anymore!
File owners are set up by the system during installation. Linux system files are owned by IDs such as root, uucp, and bin.
Do not change the ownership of these files.
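A sketch of the command form (myfile and the user fido here are simply the names the next sentence refers to):
$ chown root myfile
After this command, myfile belongs to root rather than to its original owner, fido.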
To make any further changes to the file myfile, or to chown it back to fido, you must use su or log in as root.
File Permissions
Linux lets you specify read, write, and execute permissions for each of the following: the owner, the group, and "others"
(everyone else).
read permission enables you to look at the file. In the case of a directory, it lets you list the directory's contents using ls.
write permission enables you to modify (or delete!) the file. In the case of a directory, you must have write permission in
order to create, move, or delete files in that directory.
execute permission enables you to execute the file by typing its name. With directories, execute permission enables you
to cd into them.
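For the discussion that follows, assume a long listing such as this hypothetical line for a file called myfile:
$ ls -l myfile
-rw-r--r-- 1 fido users 114 Dec 7 14:31 myfile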
The first character of the permissions is -, which indicates that it's an ordinary file. If this were a directory, the first
character would be d.
The next nine characters are broken into three groups of three, giving permissions for owner, group, and other. Each
triplet gives read, write, and execute permissions, always in that order. Permission to read is signified by an r in the first
position, permission to write is shown by a w in the second position, and permission to execute is shown by an x in the
third position. If the particular permission is absent, its space is filled by -.
In the case of myfile, the owner has rw-, which means read
and write permissions. This file can't be executed by typing myfile at the Linux prompt.
The group permissions are r--, which means that members of the group "users" (by default, all ordinary users on the
system) can read the file but not change it or execute it. Likewise, the permissions for all others are r--: read-only.
read permission = 4
write permission = 2
execute permission = 1
File permissions are often given as a three-digit number—for instance, 751. It's important to understand how the
numbering system works, because these numbers are used to change a file's permissions. Also, error messages that
involve permissions use these numbers.
The first digit codes permissions for the owner, the second digit codes permissions for the group, and the third digit codes
permissions for other (everyone else).
The individual digits are encoded by summing up all the "allowed" permissions for that particular user, using the values listed above (read = 4, write = 2, execute = 1).
Therefore, a file permission of 751 means that the owner has read, write, and execute permission (4+2+1=7), the group
has read and execute permission (4+1=5), and others have execute permission (1). If you play with the numbers, you
quickly see that the permission digits can range between 0 and 7, and that for each digit in that range there's only one
possible combination of read, write, and execute permissions.
You can also use letter codes to change the existing permissions. To specify which of the permissions to change, type u
(user), g (group), o (other), or a (all). This is followed by a + to add permissions or a - to remove them. This in turn is
followed by the permissions to be added or removed. For example, to add execute permissions for the group and others,
you would type:
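For example (using the hypothetical myfile from earlier):
$ chmod go+x myfile
The same change can be made numerically; chmod 751 myfile gives the owner read, write, and execute, the group read and execute, and others execute only.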
Lab Exercise(s):
__________________________________________________________________________________________
2. What is the difference between the permissions 777 and 775 of the chmod command?
________________________________________________________________________________________
________________________________________________________________________________________
3. Write a command to remove all files with name containing text ‘the’ ?
__________________________________________________________________________________________
__________________________________________________________________________________________
Objective:
Programming using Shell Scripting.
To Create, Execute and use basic Shell Scripting.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: To create, execute, and use basic shell scripts.
THEORY
When a user enters commands from the command line, he is entering them one at a time and getting a response
from the system. From time to time, it is required to execute more than one command, one after the other, and get
the final result. This can be done with a shell program or a shell script. A shell program is a series of Linux
commands and utilities that have been put into a file by using a text editor. When a shell program is executed, the
commands are interpreted and executed by Linux one after the other.
A shell program is like any other programming language and it has its own syntax. It allows the user to define
variables, assign various values, and so on.
At the simplest level, shell programs are just files that contain one or more shell or Linux commands. These
programs can be used to simplify repetitive tasks, to replace two or more commands that are always executed
together with a single command, to automate the installation of other programs, and to write simple interactive
applications.
A shell script is just a simple text file with a ".sh" extension that has executable permission. Use the chmod command to
change the permission.
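A minimal sketch (the file name hello.sh is arbitrary):
#!/bin/bash
# hello.sh - a one-line shell script
echo "Hello from a shell script"
Make it executable and run it:
$ chmod +x hello.sh
$ ./hello.sh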
Using Variables
Linux shell programming is a full-fledged programming language and, as such, supports various types of variables.
There are three major types of variables:
• Environment variables are part of the system environment, and they do not need to be defined. They can be used in
shell programs. Some of them, such as PATH, can also be modified within the shell program.
• Built-in variables are provided by the system. Unlike environment variables, they cannot be modified.
• User variables are defined by the user when he writes the script. They can be used and modified at will within the
shell program.
Environment Variables
You can use the printenv (or env) command to display the environment variables and their values:
$ printenv
Built-in Variables
These are special variables that Linux provides that can be used to make decisions in a program. Their values cannot
be modified.
Several other built-in shell variables are important to know about when you are doing a lot of shell programming. The
following table lists these variables and gives a brief description of what each is used for.
Variable Use
$# Stores the number of command-line arguments that were passed to the shell
program.
$? Stores the exit value of the last command that was executed.
$0 Stores the first word of the entered command (the name of the shell program).
$* Stores all the arguments that were entered on the command line ($1 $2 ...).
"$@" Stores all the arguments that were entered on the command line, individually quoted
("$1" "$2" ...).
Other useful built-in and environment variables include:
• $BASH
• $USER
• $BASH_VERSION
• $PATH
• $TERM, etc.
User Variables
With bash and pdksh, you define a user variable simply by assigning it a value, as in:
count=5
With tcsh you would have to enter the following command to achieve the same results:
set count = 5
With the bash and pdksh syntax for setting a variable, you must make sure that there are no spaces on either side of
the equal sign. With tcsh, it doesn't matter if there are spaces or not.
Notice that you do not have to declare the variable as you would if you were programming in C or Pascal. This is
because the shell language is a non-typed interpretive language. This means that you can use the same variable to
store character strings that you use to store integers. You would store a character string into a variable in the same
way that you stored the integer into a variable. For example:
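(A one-line sketch; the variable name and value are arbitrary.)
name="John"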
To access the value stored in a variable precede the variable name with a dollar sign ($). If you wanted to print the
value stored in the count variable to the screen, you would do so by entering the following command:
echo $count
If you omitted the $ from the preceding command, the echo command would display the word count onscreen.
Shell Arithmetic Operators
The following arithmetic operators are supported by the Bourne shell. In the examples, assume that variable a holds 10 and variable b holds 20.
+ (Addition) Adds values on either side of the operator. `expr $a + $b` will give 30.
- (Subtraction) Subtracts the right-hand operand from the left-hand operand. `expr $a - $b` will give -10.
* (Multiplication) Multiplies values on either side of the operator. `expr $a \* $b` will give 200.
/ (Division) Divides the left-hand operand by the right-hand operand. `expr $b / $a` will give 2.
% (Modulus) Divides the left-hand operand by the right-hand operand and returns the remainder. `expr $b % $a` will give 0.
== (Equality) Compares two numbers; returns true if both are the same. [ $a == $b ] would return false.
!= (Inequality) Compares two numbers; returns true if they are different. [ $a != $b ] would return true.
There are many options to perform arithmetic operations on the bash shell. Some of the options are given below that
we can adopt to perform arithmetic operations:
Double Parentheses
Double parentheses is the easiest mechanism to perform basic arithmetic operations in the Bash shell. We can use
this method by using double brackets with or without a leading $.
Example#1
Sum=$((10+3))
echo "Sum = $Sum"
Example#2
x=8
y=2
echo "x=8, y=2"
echo "Addition of x & y"
echo $(( $x + $y ))
Example#3 (using the external expr command instead of double parentheses):
a=10
b=3
# there must be spaces before/after the operator
sum=`expr $a + $b`
echo $sum
Lab Exercise(s):
1. Write a shell program that takes one parameter (your name) and displays it on the screen.
2. Write a shell program that takes a number of parameters equal to the last digit of your roll number and displays
the values of the built-in variables such as $#, $0, and $* on the screen.
3. Write a Shell script to perform addition on numbers provided by command line parameters.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Test Commands and Conditional Statements.
Focusing on the usage of the test commands and conditional statements
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Focusing on the usage of the test commands and conditional statements.
THEORY
In shell scripting a command called test is used to evaluate conditional expressions. Test command is used to
evaluate a condition that is used in a conditional statement or to evaluate the entrance or exit criteria for an iteration
statement.
Several built-in operators can be used with the test command. These operators can be classified into four groups:
integer operators, string operators, file operators, and logical operators.
The shell integer operators perform similar functions to the string operators except that they act on integer
arguments. The following table lists the test command's integer operators.
Operator
Meaning
int1 -eq int2 Returns True if int1 is equal to int2.
int1 -ge int2 Returns True if int1 is greater than or equal to int2.
int1 -gt int2 Returns True if int1 is greater than int2.
int1 -le int2 Returns True if int1 is less than or equal to int2.
int1 -lt int2 Returns True if int1 is less than int2.
int1 -ne int2 Returns True if int1 is not equal to int2.
The file operators are used to test the status of files. The table below lists the file operators that are
supported by the three shell programming languages.
Operator Meaning
-d filename Returns True if file, filename is a directory.
-f filename Returns True if file, filename is an ordinary file.
-r filename Returns True if file, filename can be read by the process.
-s filename Returns True if file, filename has a nonzero length.
-w filename Returns True if file, filename can be written by the process.
-x filename Returns True if file, filename is executable.
The test command's logical operators are used to combine two or more of the integer, string, or file operators or to
negate a single integer, string, or file operator. The table below lists the test command's logical operators.
Command Meaning
! expr Returns True if expr is not true.
expr1 -a expr2 Returns True if expr1 and expr2 are true.
expr1 -o expr2 Returns True if expr1 or expr2 is true.
Conditional Statements
The bash, pdksh, and tcsh each have two forms of conditional statements. These are the if statement and the
case statement. These statements are used to execute different parts of your shell program depending on
whether certain conditions are true. As with most statements, the syntax for these statements is slightly different
between the different shells.
The if Statement
All three shells support nested if...then...else statements. These statements provide you with a way of performing
complicated conditional tests in your shell programs. The syntax of the if statement is the same for bash and
pdksh and is shown here:
if [ expression ]
then
    commands
elif [ expression2 ]
then
    commands
else
    commands
fi
The elif and else clauses are both optional parts of the if statement.
Example:
This statement checks to see if there is a file in the current directory:
if [ -f "sar.txt" ]
then
    echo "There is a .txt file in the current directory."
else
    echo "Could not find the .txt file."
fi
The case Statement
The case statement enables you to compare a pattern with several other patterns and execute a block of code if
a match is found. Once again, the syntax for the case statement is identical for bash and pdksh and different for
tcsh. The syntax for bash and pdksh is the following:
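A sketch of that syntax (standard bash form):
case string in
    pattern1)
        commands ;;
    pattern2)
        commands ;;
    *)
        commands ;;
esac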
Example:
The following code is an example of a bash or pdksh case statement. This code checks to see if the first
command-line option was -i or -e. If it was -i, the program counts the number of lines in the file specified by the
second command-line option that begins with the letter i. If the first option was -e, the program counts the number
of lines in the file specified by the second command-line option that begins with the letter e. If the first command-
line option was not -i or -e, the program prints a brief error message to the screen.
case $1 in
-i) count=`grep ^i $2 | wc -l`
    echo "The number of lines in $2 that start with an i is $count"
    ;;
-e) count=`grep ^e $2 | wc -l`
    echo "The number of lines in $2 that start with an e is $count"
    ;;
*)  echo "That option is not recognized"
    ;;
esac
Lab Exercise(s):
1. Write a script that takes two strings as input, compares them, and prints the result of the comparison.
The user may provide input to the Bash script using:
read var
2. Write a script that takes a number (parameter) from 1-3 as input and uses case to display the name of the
corresponding month.
3. Write a script that takes command-line argument for age and marks, and decide whether student is eligible
for admission or not.
Eligibility Criteria:
Age should be less than 18 and marks should be greater than 700.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Loops and Functions
Focusing on the usage of iteration statements and functions in shell programming.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Focusing on the usage of iteration statements and functions in shell programming
THEORY
The shell languages also provide several iteration or looping statements. The most commonly used of these is the
for statement.
The ‘for’ statement executes the commands that are contained within it a specified number of times. The ‘bash’ and
‘pdksh’ have two variations of the for statement.
The first form of the for statement that bash and pdksh support has the following syntax:
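A sketch of this form (standard bash syntax):
for var1 in list
do
    commands
done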
In this form, the for statement executes once for each item in the list. This list can be a variable that contains several
words separated by spaces, or it can be a list of values that is typed directly into the statement. Each time through
the loop, the variable var1 is assigned the current item in the list, until the last one is reached.
The second form of for statement has the following syntax:
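A sketch of this form:
for var1
do
    commands
done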
In this form, the for statement executes once for each item in the variable var1. When this syntax of the for
statement is used, the shell program assumes that the var1 variable contains all the positional parameters that were
passed in to the shell program on the command line.
Typically, this form of for statement is the equivalent of writing the following for statement:
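Roughly, that is:
for var1 in "$@"
do
    commands
done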
The equivalent of the for statement in tcsh is called the foreach statement. It behaves in the same manner as the
bash and pdksh for statement. The syntax of the foreach statement is the following:
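A sketch of that syntax (standard tcsh form):
foreach var1 (list)
    commands
end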
Another iteration statement offered by the shell programming language is the while statement. This statement
causes a block of code to be executed while a provided conditional expression is true. The syntax for the while
statement in bash and pdksh is the following:
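A sketch of that syntax:
while [ expression ]
do
    commands
done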
bash, pdksh, and tcsh all support a command called shift. The shift command moves the current values stored in the
positional parameters to the left one position. For example, if the values of the current positional parameters are
$1 = -r $2 = file1 $3 = file2
and you executed the shift command
shift
the resulting positional parameters would be as follows:
$1 = file1 $2 = file2
The break Statement
The break statement can be used to terminate an iteration loop, such as a for, until or repeat command.
Functions
The shell languages enable you to define your own functions. These functions behave in much the same way as
functions you define in C or other programming languages. The main advantage of using functions as opposed to
writing all of your shell code in line is for organizational purposes. Code written using functions tends to be much
easier to read and maintain and also tends to be smaller, because you can group common code into functions instead
of putting it everywhere it is needed. The syntax for creating a function in bash and pdksh is the following:
fname () {
    shell commands
}
Once you have defined your function, you can invoke it by entering its name (for example, fname) as a command.
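A minimal sketch (the function name and body are arbitrary):
greet () {
    echo "Hello, $1"
}
greet student    # prints: Hello, student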
Lab Exercise(s):
1. Write a script that creates a backup copy of each file in your home directory in a subdirectory called backup,
using the for statement. If the operation fails, an error message is to be displayed.
2. Write a script that calculates the average of all even numbers less than or equal to your roll number and prints
the result.
3. Write a function that displays the name of the week days starting from Sunday if the user passes a day
number. If a number provided is not between 1 and 7 an error message is displayed.
4. Write scripts that display the parameters passed, along with the parameter number, using while and until
statements.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Processes in Operating Systems
Understanding Process, Process Creation, Zombie and Orphan Processes
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Understanding Process, Process Creation, Zombie and Orphan Processes.
THEORY
Everything that runs on a Linux system, every user task and every system daemon, is a process.
Knowing how to manage the processes running on your Linux system is an important (indeed even critical) aspect
of system administration.
A process can be described as:
• A program in execution.
• An instance of a program running on a computer.
• The entity that can be assigned to and executed on a processor.
• A unit of activity with an associated set of system resources.
Using ps command
The easiest method of finding out what processes are running on your system is to use the ps (process status)
command. The ps command has a number of options and arguments, although most system administrators use
only a couple of common command-line formats. We can start by looking at the basic usage of the ps command,
and then examine some of the useful options. The ps command is available to all system users, as well as root,
although the output changes a little depending on whether you are logged in as root when you issue the command.
For example, you might see the following output when you issue the command:
$ ps
PID TTY STAT TIME COMMAND
41 v01 S 0:00 -bash
134 v01 R 0:00 ps
To obtain much more complete information about the processes currently on the system, use:
$ ps -aux
To see currently running processes and other information, such as memory and CPU usage, with real-time updates, use:
$ top
The -e option generates a list of information about every process currently running. The -f option generates a listing that contains fewer items of information for each process than the -l option:
$ ps -ef | less
Note: For more understanding and to know about more options for ps command go to man page.
“man ps”
Process Creation
A "parent process" is a process that has created one or more child processes. In UNIX, every process except process
0 is created when another process executes the fork system call. The process that invoked fork is the parent process
and the newly created process is the "child process". Every process except process 0 has one parent process, but
can have many child processes. The kernel identifies each process by its process identifier (PID). Process 0 is a
special process that is created when the system boots. Process 1, known as init, is the ancestor of every other
process in the system.
When a child process terminates execution, either by calling the exit system call, causing a fatal execution error, or
receiving a terminating signal, an exit status is returned to the operating system. A parent will typically retrieve its
child's exit status by calling the wait system call. However, if a parent does not do so, the child process becomes a
"zombie process". The following programs illustrate the fork() system call.
Program1:
# importing os module
import os

os.fork()
print("I am process:")

Program2:
# importing os module
import os

os.fork()
os.fork()
os.fork()
print("I am process:")
Wait() Method Of Python Os Module
The wait() method of os module in Python enables a parent process to synchronize with the child process. i.e, to wait
till the child process exits and then proceed.
Example Program:
# import the os module of Python
import os

# Create a child process
retVal = os.fork()

if retVal == 0:
    print("Child process Running............")
    print("Child process Running............")
    print("Child process Running............")
    print("Child process %d exiting" % (os.getpid()))
else:
    # Parent process waiting
    childProcExitInfo = os.wait()
    print("Child process %d exited" % (childProcExitInfo[0]))
    print("Now Parent process Running....")
Zombie Processes
On Unix and Unix-like computer operating systems, a "zombie process" or defunct process is a process that has
completed execution but still has an entry in the process table. This entry is still needed to allow the parent process
to read its child's exit status. The term zombie process derives from the common definition of zombie: an undead
person. When a process ends, all of the memory and resources associated with it are deallocated so they can be
used by other processes. However, the process's entry in the process table remains. The parent can read the child's
exit status by executing the wait system call, whereupon the zombie is removed.
To observe the creation of Zombie Process, Python sleep() call will be used to simulate a delay
in the parent process.
Example Program:
# Create a child process
import os
import time

retval = os.fork()
if retval == 0:
    os._exit(0)       # child exits immediately and becomes a zombie
else:
    time.sleep(10)    # parent delays (as described above) and does not call wait()
The zombie or defunct process can be seen in another instance of the terminal by using ps -ef just
after running the above program.
Note: A process with status Z, or marked as <defunct>, i.e., defunct, is a ZOMBIE process.
Orphan Processes
An “orphan process" is a computer process whose parent process has finished or terminated, though it remains
running itself. In a Unix-like operating system any orphaned process will be immediately adopted by the special init
system process. This operation is called reparenting and occurs automatically. Even though technically the process
has the "init" process as its parent, it is still called an orphan process, since the process that originally created it no
longer exists. A process can be orphaned unintentionally, such as when the parent process terminates or crashes.
Example Program:
import os
import time

retval = os.fork()
if retval == 0:
    time.sleep(60)    # child keeps running after its parent has finished, so it becomes an orphan
To observe the creation of an orphan process, the Python sleep() call is used to simulate a delay in the child process.
The parent ID of this orphan process, i.e. 1, can be seen in another instance of the terminal by using ps -ef just after
running the above program.
Lab Exercise(s):
1. Using a Linux system, write a program that forks a child process that ultimately becomes a zombie process.
This zombie process must remain in the system for at least 10 seconds.
2. Write a program that creates a child process which further creates its two child processes. Store the process id of
each process in an array called Created Processes. Also display the process id of the terminated child to
understand the hierarchy of termination of each child process.
3. Write a program in which a parent process will initialize an array, and child process will sort this array. Use wait()
and sleep() methods to achieve the synchronization such that parent process should run first.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Threads in Operating Systems
Understanding Threads, Threads Creation, Multithreaded Programming using Python
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Understanding Threads, Threads Creation, Multithreaded Programming using Python & C.
THEORY
A thread is the smallest unit of processing that can be scheduled by an OS. In general, it is contained in a process, so multiple
threads can exist within the same process. A thread shares resources with its process. Each thread belongs to exactly
one process and no thread can exist outside a process.
A thread uses the same address space of a process. A process can have multiple threads. A key difference between
processes and threads is that multiple threads share parts of their state. Typically, multiple threads can read from and
write to the same memory (no process can directly access the memory of another process). However, each thread still
has its own stack of activation records and its own copy of CPU registers, including the stack pointer and the program
counter, which together describe the state of the thread's execution. A thread is a particular execution path of a process.
When one thread modifies a process resource, the change is immediately visible to sibling threads. Processes are
independent of one another, while a thread exists within a process. Processes have separate address spaces, while threads share their
address space. Multithreading has some advantages over multiple processes: threads require less overhead to
manage than processes, and intra-process thread communication is less expensive than inter-process communication.
Types of Threads:
User Level thread (ULT) –
User threads are supported above the kernel, without kernel support. These are the threads that application
programmers would put into their programs.
Top Command
The top command can show a real-time view of individual threads. To enable thread views in the top output, invoke
top with "-H" option. This will list all Linux threads. You can also toggle on or off thread view mode while top is
running, by pressing 'H' key.
*Command-line tools such as ps or top, which display process-level information by default,
can be instructed in many ways to display thread-level information.
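For instance (common procps invocations; options vary slightly between ps versions):
$ ps -eLf    # one line per thread (see the LWP and NLWP columns)
$ top -H     # top with one line per thread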
THREAD CREATION in Python- (Multithreaded Programming):
There are two modules which support the usage of threads in Python 3: _thread and threading.
The Threading Module
The newer threading module, included since Python 2.4, provides much more powerful, high-level
support for threads than the older thread module.
The threading module has the Thread class that implements threading. The methods provided by the Thread class are
as follows:
• run() − The run() method is the entry point for a thread.
• start() − The start() method starts a thread by calling the run method.
• join([time]) − The join() waits for threads to terminate.
• is_alive() − The is_alive() method checks whether a thread is still executing (older code may spell it isAlive()).
• getName() − The getName() method returns the name of a thread.
• setName() − The setName() method sets the name of a thread.
To implement a new thread using the threading module, you have to do the following − Define a new subclass
of the Thread class.
Override the __init__(self [,args]) method to add additional arguments.
Then, override the run(self [,args]) method to implement what the thread should do when started. Once you have
created the new Thread subclass, you can create an instance of it and then start a new thread by invoking the
start(), which in turn calls the run() method.
Example Program:
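A minimal sketch of the subclassing approach just described (the class name MyThread and its arguments are illustrative):
import threading

class MyThread(threading.Thread):
    def __init__(self, label, count):
        threading.Thread.__init__(self)
        self.label = label
        self.count = count

    def run(self):
        # Entry point of the thread
        for i in range(self.count):
            print(self.label, "iteration", i)

t1 = MyThread("Thread-1", 3)
t2 = MyThread("Thread-2", 3)
t1.start()    # start() invokes run() in a new thread
t2.start()
t1.join()
t2.join()
print("All threads finished")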
Limitations and difficulty in using Python for Multithreading
The GIL (global interpreter lock) is the mechanism used by the CPython interpreter to ensure that only one thread
executes Python bytecode at a time. Due to the GIL, only one thread can execute at a time; therefore, the
above code is concurrent but not parallel. Multithreading in Python can be considered an appropriate model
only if you want to run multiple I/O-bound tasks simultaneously.
Using the threading module in Python or any other interpreted language with a GIL can actually result in reduced
performance. If your code is performing a CPU bound task, using the threading module will result in a slower
execution time. For CPU bound tasks and truly parallel execution, we can use the multiprocessing module
instead of multithreading in Python.
Thread Creation in C
Normally when a program starts up and becomes a process, it starts with a default thread.
So, we can say that every process has at least one thread of control.
C does not contain any built-in support for multithreaded applications, so POSIX (pthreads) can be used to write
multi-threaded C program. POSIX Threads, or Pthreads provides API which are available on many Unix-like
POSIX systems such as FreeBSD, NetBSD, GNU/Linux, Mac OS X and Solaris.
#include <pthread.h>
pthread_create (thread, attr, start_routine, arg)
Parameter Description
thread An opaque, unique identifier (pthread_t type address) for the new thread returned by the subroutine. So the
first argument will hold the thread ID of the newly created thread.
attr An opaque attribute object that may be used to set thread attributes. You can specify a thread attributes
object, or NULL for the default values.
start_routine The third argument is a function pointer: the C routine that the thread will execute once it is created.
Each thread starts with a function, and that function's address is passed here as the third argument so
that the kernel knows which function to start the thread from.
arg A single argument that may be passed to start_routine. Since the function whose address is passed as the
third argument may itself accept arguments, these arguments are passed as a pointer to void. A void
pointer was chosen because, if the function accepts more than one argument, this pointer can point to a
structure containing all of them.
Example C program:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>

void *myfunc(void *ptr);

int main()
{
    char *msg1 = "Thread 1";
    char *msg2 = "Thread 2";
    pthread_t tid1, tid2;

    pthread_create(&tid1, NULL, myfunc, (void *)msg1);
    pthread_create(&tid2, NULL, myfunc, (void *)msg2);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    return 0;
}

void *myfunc(void *ptr)
{
    char *message = (char *)ptr;
    for (int i = 0; i < 10; i++) {
        printf("%s %d\n", message, i);
        sleep(1);
    }
    return NULL;
}
Compile C program.
To compile a multithreaded program using gcc, we need to link it with the pthreads library. Following is the
command used to compile the program.
gcc -o hello hello.c -lpthread
This command invokes the GNU C compiler to compile the file hello.c and output (-o) the result to an
executable called hello.
Execute the program by typing:
./hello
This runs the executable and prints the program's output to the terminal.
Lab Exercise(s):
2. Create as many threads as the user wants, using an array of threads and a loop. Each thread should display
a message that is passed to it through an argument.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
CPU Scheduling – FCFS and SJF
Implementation of CPU scheduling algorithms; FCFS and SJF.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Implementation of CPU scheduling algorithms; FCFS and SJF.
THEORY
First Come First Serve is a Non-preemptive Scheduling algorithm where each process is executed according to its
arrival time.
Step 1 : Input the number of processes required to be scheduled using FCFS, burst time for each process and its
arrival time.
Step 2 : Using an enhanced bubble sort technique, sort all the given processes in ascending order of arrival
time in a ready queue.
Step 3 : Calculate the Finish Time, Turn Around Time and Waiting Time for each process, which in turn help to
calculate the Average Waiting Time and Average Turn Around Time required by the CPU to schedule the given set of processes
using FCFS.
Step 4 : The process with the earliest arrival time comes first and gets scheduled first by the CPU.
Step 5 : Calculate the Average Waiting Time and Average Turn Around Time.
Step 6 : Stop.
Sample Run:
Enter total number of processes (maximum 20): 3
Enter Process Arrival Time
P[1]:0
P[2]:0
P[3]:0
Enter Process Burst Time
P[1]:24
P[2]:3
P[3]:3
Process Burst Time Waiting Time Turnaround Time
P[1] 24 0 24
P[2] 3 24 27
P[3] 3 27 30
Average Waiting Time:17
Average Turnaround Time:27
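A minimal Python sketch of the steps above (function and variable names are illustrative); with the sample input it reproduces the averages shown:
def fcfs(arrival, burst):
    # Step 2: sort process indices by arrival time
    order = sorted(range(len(burst)), key=lambda i: arrival[i])
    time, waiting, turnaround = 0, {}, {}
    for i in order:
        time = max(time, arrival[i])          # CPU may sit idle until the process arrives
        waiting[i] = time - arrival[i]
        time += burst[i]                      # finish time of process i
        turnaround[i] = time - arrival[i]
    n = len(burst)
    print("Average Waiting Time:", sum(waiting.values()) / n)
    print("Average Turnaround Time:", sum(turnaround.values()) / n)

fcfs([0, 0, 0], [24, 3, 3])    # sample run above: averages 17.0 and 27.0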
Shortest job first (SJF) or shortest job next, is a scheduling policy that selects the waiting process with the smallest
execution time to execute next. SJN is a non-preemptive algorithm.
Step 1 : Input the number of processes required to be scheduled using SJF, burst time for each process and its
arrival time.
Step 2 : Using a selection sort technique, sort all the given processes in ascending order of burst time.
Step 3 : Calculate the Finish Time, Turn Around Time and Waiting Time for each process, which in turn help to
calculate the Average Waiting Time and Average Turn Around Time required by the CPU to schedule the given set of processes.
Step 4 : The process with the smallest burst time (among the processes that have arrived) comes first and gets scheduled first by the CPU.
Step 5 : Calculate the Average Waiting Time and Average Turn Around Time.
Step 6 : Stop
Sample Run:
Enter number of process: 4
Enter Burst Time: p1:4 p2:8 p3:3 p4:7
Enter Process Arrival Time
P[1]:0
P[2]:0
P[3]:0
P[4]:0
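A non-preemptive SJF sketch along the same lines (again, names are illustrative):
def sjf(arrival, burst):
    n = len(burst)
    done = [False] * n
    waiting, turnaround, time = [0] * n, [0] * n, 0
    for _ in range(n):
        # Among processes that have arrived and are not finished, pick the smallest burst
        ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        if not ready:
            time = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        i = min(ready, key=lambda j: burst[j])
        waiting[i] = time - arrival[i]
        time += burst[i]
        turnaround[i] = time - arrival[i]
        done[i] = True
    print("Average Waiting Time:", sum(waiting) / n)
    print("Average Turnaround Time:", sum(turnaround) / n)

sjf([0, 0, 0, 0], [4, 8, 3, 7])    # the sample input above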
Lab Exercise(s):
Objective:
CPU Scheduling – Round Robin and Priority
Implementation of CPU scheduling algorithms: Priority and Round Robin.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Implementation of CPU scheduling algorithms; Priority and Round Robin.
THEORY
Implementation:
1. First, input the processes with their arrival time, burst time and priority.
2. Sort the processes according to arrival time; if two processes have the same arrival
time, sort according to process priority; if two processes have the same priority, sort
according to process number.
3. Now simply apply the FCFS algorithm.
Each process will be executed according to its priority. Calculate the waiting time
and turnaround time of each of the processes accordingly.
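A minimal non-preemptive sketch of these steps (it assumes, as is common, that a smaller priority number means higher priority):
def priority_schedule(procs):
    # procs: list of (pid, arrival, burst, priority) tuples
    procs = sorted(procs, key=lambda p: (p[1], p[3], p[0]))   # step 2: arrival, then priority, then pid
    t = 0
    for pid, arrival, burst, prio in procs:                   # step 3: FCFS over the sorted list
        t = max(t, arrival) + burst                           # completion time of this process
        turnaround = t - arrival
        waiting = turnaround - burst
        print("P%d waiting: %d turnaround: %d" % (pid, waiting, turnaround))

priority_schedule([(1, 0, 5, 2), (2, 0, 3, 1), (3, 1, 4, 3)])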
Lab Exercise(s):
Objective:
Semaphore Mechanism in Operating Systems
To implement a classical problem (Producer-Consumer) using semaphores.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Implementation of a Classical problem (Producer-Consumer) using semaphores.
THEORY
A semaphore is simply a variable. This variable is used to solve the critical section problem and to achieve process
synchronization in a multi-processing environment.
The two most common kinds of semaphores are counting semaphores and binary semaphores. A counting
semaphore can take non-negative integer values, while a binary semaphore can take only the values 0 and 1. A
semaphore can only be accessed using the following operations: wait() and signal().
wait(Semaphore s) {
    while (s == 0);   /* wait until s > 0 */
    s = s - 1;
}

signal(Semaphore s) {
    s = s + 1;
}
[ In python, acquire() and release() provide wait() and signal() functionality,
respectively]
Python’s Semaphore
using class threading.Semaphore(s)
acquire()
Obtain a semaphore. The process is blocked until the semaphore is available.
release()
Release the semaphore and if another thread is waiting for the semaphore, wake up that
thread.
The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1.
The empty and full semaphores count the number of empty and full buffers. The semaphore empty is initialized to
the value n (n =5 in this example); the semaphore full is initialized to the value 0.
import threading
import random
import time

buf = []
empty = threading.Semaphore(5)
full = threading.Semaphore(0)
mutex = threading.Lock()
def producer():
    nums = range(5)
    global buf
    num = random.choice(nums)
    empty.acquire()
    mutex.acquire()   # added
    buf.append(num)
    print("Produced", num, buf)
    mutex.release()   # added
    full.release()
    time.sleep(1)

def consumer():
    global buf
    full.acquire()
    mutex.acquire()   # added
    num = buf.pop(0)
    print("Consumed", num, buf)
    mutex.release()   # added
    empty.release()
    time.sleep(2)
producerThread1 = threading.Thread(target=producer)
consumerThread1 = threading.Thread(target=consumer)
producerThread2 = threading.Thread(target=producer)
consumerThread2 = threading.Thread(target=consumer)
consumerThread1.start()
consumerThread2.start()
producerThread1.start()
producerThread2.start()
Lab Exercise(s):
1. Write a python program that demonstrates the synchronization of Readers and Writer Problem using
semaphores.
2. Write a python program that demonstrates the synchronization of Consumer producer Bounded Buffer
Problem using semaphores.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Inter process communication (IPC) using Pipe
Multiprocessing in Python, Implement Pipe using os and multiprocessing module
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Multiprocessing in Python, Implement Pipe using os and multiprocessing module in Python.
THEORY
Inter process communication (IPC) is a mechanism which allows processes to communicate with each other and
synchronize their actions.
There are several different ways to implement IPC, for example pipe, shared memory, message passing. IPC is
set of programming interfaces, used by programs to communicate between series of processes.
Pipe
Pipe is a communication medium between two or more related or interrelated processes. It can be either within one
process or a communication between the child and the parent processes. Communication can also be multi-level
such as communication between the parent, the child and the grand-child, etc. Communication is achieved by one
process writing into the pipe and other reading from the pipe.
Pipe communication is viewed as only one-way communication i.e., either the parent process writes and the child
process reads or vice-versa but not both. However, what if both the parent and the child need to write and read from
the pipes simultaneously, the solution is a two-way communication using pipes. Two pipes are required to establish
two-way communication.
os.pipe() method in Python is used to create a pipe. A pipe is a method to pass information from one process to
another process. It offers only one-way communication and the passed information is held by the system until it is
read by the receiving process. os.pipe(), Return a pair of file descriptors (r, w) usable for reading and writing,
respectively.
Just like other programming languages, Python also supports the implementation of pipes.
import os

# Create the pipe before forking so that both processes share its ends
r, w = os.pipe()

pid = os.fork()

if pid > 0:
    # Parent process writes to the pipe
    os.close(r)
    print("Parent process is writing")
    text = "Hello child process"
    os.write(w, text.encode())   # os.write() expects bytes
    os.close(w)
else:
    # Child process reads from the pipe
    os.close(w)
    # Read the text written by parent process
    print("\nChild Process is reading")
    r = os.fdopen(r)
    print("Read text:", r.read())
os.fdopen(fd, *args, **kwargs)
Return an open file object connected to the file descriptor fd.
A file descriptor (FD) is a small non-negative integer that helps in identifying an open file within a process while
using input/output resources like network sockets or pipes.
The os.write() method writes the bytestring str to the file descriptor fd and returns the number of bytes actually written.
Syntax
Following is the syntax for the write() method: os.write(fd, str)
Multiprocessing
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer
system. Multiprocessing can significantly increase the performance of the program.
In Python , multiprocessing is a package that supports spawning processes using an API similar to the threading
module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global
Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the
programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
Exchanging objects between processes
When using multiple processes, one generally uses message passing for communication between processes and
avoids having to use any synchronization primitives like locks.
For passing messages one can use Pipe() (for a connection between two processes) or a queue (which allows
multiple producers and consumers).
multiprocessing.Pipe([duplex])
Returns a pair (conn1, conn2) of Connection objects representing the ends of a pipe.
If duplex is True (the default) then the pipe is bidirectional. If duplex is False then the pipe is unidirectional
The Pipe() function returns a pair of connection objects connected by a pipe which by default is duplex (two-way).
For example:
from multiprocessing import Process, Pipe
def f(conn):
    conn.send([30, None, 'hello Students'])
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())
    p.join()
The two connection objects returned by Pipe() represent the two ends of the pipe. Each connection object has
send() and recv() methods (among others). Note that data in a pipe may become corrupted if two processes (or
threads) try to read from or write to the same end of the pipe at the same time. Of course, there is no risk of
corruption from processes using different ends of the pipe at the same time.
Lab Exercise(s):
1. Modify Example code to pass your name from parent to child process and display it by concatenating
with your roll number and department. Take your name as user input and pass it as command line
argument.
2. Write a Python program in which child process put the square of each number from 1 to 5 (exclusive) on
pipe. After the child process has finished its execution, we get the data from the pipe in parent process.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Inter process communication (IPC) using Shared Memory
To Implement Shared Memory using multiprocessing module in Python
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: Implement Shared Memory using multiprocessing module in Python.
THEORY
Inter process communication (IPC)
Inter process communication (IPC) is a mechanism which allows processes to communicate with each other and
synchronize their actions.
There are several different ways to implement IPC, for example pipe, shared memory, message passing. IPC is set
of programming interfaces, used by programs to communicate between series of processes.
Shared memory
Shared memory is a memory shared between two or more processes.
Shared memory is the fastest inter-process communication mechanism. The operating system maps a memory
segment in the address space of several processes, so that several processes can read and write in that memory
segment without calling operating system functions.
Shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide
communication among them or avoid redundant copies.
multiprocessing.Value(typecode_or_type, *args, lock=True)
Return a **ctypes object allocated from shared memory. By default, the return value is actually a synchronized
wrapper for the object. The object itself can be accessed via the value attribute of a Value. typecode_or_type
determines the type of the returned object: it is either a ctypes type or a one-character typecode of the kind used by
the array module. *args is passed on to the constructor for the type.
If lock is True (the default) then a new recursive lock object is created to synchronize access to the value. If lock is a
Lock or RLock object then that will be used to synchronize access to the value. If lock is False then access to the
returned object will not be automatically protected by a lock, so it will not necessarily be "process-safe".
** ctypes is a foreign function library for Python. It provides C compatible data types, and
allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in
pure Python.
from multiprocessing import Process, Value, Array

def f(n, a):
    n.value = 3.1415927
    for i in range(len(a)):
        a[i] = -a[i]

if __name__ == '__main__':
    num = Value('d', 0.0)
    arr = Array('i', range(10))

    p = Process(target=f, args=(num, arr))
    p.start()
    p.join()

    print(num.value)
    print(arr[:])
The 'd' and 'i' arguments used when creating num and arr are typecodes of the kind used by the array module: 'd'
indicates a double precision float and 'i' indicates a signed integer.
Lab Exercise(s):
1. Start five processes using multiprocessing.Process objects , each process will update shared memory Value
object using their own target function (callable object to be invoked by the run() method). After execution of all
child processes, parent process should display the value of the object.
2. Generate 10 random numbers between 0 and 10, and calculate square of each number such that process#1
calculates square of first five numbers and process#2 calculates square of remaining five numbers, Store the
square results in an array (shared memory region) using multiprocessing module.
Usman Institute of Technology
Department of Computer Science
Fall 2022
Objective:
Deadlock Avoidance
To implement the Banker's algorithm for the purpose of deadlock avoidance.
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: To implement the Banker's algorithm for the purpose of deadlock avoidance.
THEORY
Deadlock
Deadlock (DL) can be defined as the permanent blocking of a set of processes that either compete for system
resources or communicate with each other. In a multiprogramming environment, several processes may compete
for a finite number of resources.
A process requests resources; if the resources are not available at that time, the process enters a wait state. A set
of processes is deadlocked when each process in the set is blocked awaiting an event (typically the freeing up of
some requested resource) that can only be triggered by another blocked process in the set. Deadlock is permanent
because none of the events is ever triggered.
Deadlock Avoidance
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait
condition can never exist. The resource allocation state is defined by the number of available and allocated resources
and the maximum demands of the processes.
When a new process enters the system, it must declare the maximum number of instances of each resource type
that it may need. This number may not exceed the total number of resources in the system. When a user requests a
set of resources, the system must determine whether the allocation of these resources will leave the system in a safe
state. If it will, the resources are allocated; otherwise, the process must wait until some other process releases
enough resources
Several data structures must be maintained to implement the banker’s algorithm. These data structures encode the
state of the resource-allocation system. We need the following data structures, where n is the number of processes
in the system and m is the number of resource types:
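The standard structures, in the usual textbook formulation, are:
• Available: a vector of length m indicating the number of available instances of each resource type.
• Max: an n x m matrix defining the maximum demand of each process.
• Allocation: an n x m matrix giving the number of instances of each resource type currently allocated to each process.
• Need: an n x m matrix indicating the remaining resource need of each process, where Need[i][j] = Max[i][j] - Allocation[i][j].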
Safety Algorithm:
This algorithm finds out whether or not a system is in a safe state. Let Work and Finish be vectors of length m and n,
respectively.
1. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an index i such that both
   (a) Finish[i] == false
   (b) Need_i <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i, Finish[i] = true. Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm:
The algorithm for determining whether requests can be safely granted.
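The usual textbook steps, with Request_i denoting the request vector for process Pi, are:
1. If Request_i <= Need_i, go to step 2. Otherwise, raise an error, since the process has exceeded its maximum claim.
2. If Request_i <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state: Available = Available - Request_i; Allocation_i = Allocation_i + Request_i; Need_i = Need_i - Request_i. Then run the safety algorithm on the resulting state.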
If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is allocated its resources.
However, if the new state is unsafe, then Pi must wait for Request i, and the old resource allocation state is restored.
Lab Exercise(s):
1- Write a Python Program to implement Banker’s Algorithm (Safety Algorithm) for a snapshot (as per the
inputs):
Inputs
• Number of processes
• Number of resources
• Available quantity of each resource after allocation of resources.
• Allocated quantity of each resource assigned to each process.
• Available resources or total resources
Output
Identify the state of system (safe/un-safe).
2- Write a Bash script that takes user input for the number of processes and the number of resources and passes
them into the above Python script (written for Task #1) to find the safety sequence, assuming that the other inputs are
provided inside the Python program.
Objective:
Open Ended LAB
To construct a GUI-based script that behaves like an Operating System
Name of Student
Student ID
Marks Obtained
Remarks
Signature
Objective: To construct a GUI-based script that behaves like an Operating System.
Requirements:
Your Operating System should have the following properties.
1. It allows you to create folders and files.
2. It allows you to change rights of your files.
3. It can help you in searching files.
4. It allows you to create processes and threads that perform specific tasks
For example
-Create a process that sorts array in ascending order.
-Create multiple threads that may help in solving matrix operations etc.
5. It allows you to display processes, like the Task Manager in Windows, and should
allow you to kill any selected process.
6. Allows you to open applications like Firefox, Image Viewer, etc.
7. Allows you to share data between processes such that the output of one process
becomes the input of another. Provide a sub-menu to select Process1 and Process2,
where one process provides input to the other, or vice versa.
8. Allows you to write Linux C/Python programs to create your own process that can
perform any desirable task. Provide options in a user sub-menu to execute and
delete that program.
Use dialog boxes, file browsers, and other common "windowing" controls and techniques to interact
with your users for more natural conversations so that “You present information, ask for a response,
and react accordingly 😊”
Your script should have Main menu to select operations mentioned above. Main menu items may have
further sub-menu for more natural and friendlier UI.
Your script (OS) should use C/Python programs when creating processes, threads and killing processes.
-You can add further options into your script to make your OS more useful.
-Make necessary assumptions to design your OS that implements all the required options.