OS Chapter 5
Processes on a UNIX system terminate by executing the exit system call. An exiting process
enters the zombie state, relinquishes its resources, and dismantles its context except for its slot in
the process table. The syntax for the call is
exit (status);
where the value of status is returned to the parent process for its examination. Processes may call
exit explicitly or implicitly at the end of a program: the startup routine linked with all C
programs calls exit when the program returns from the main function, the entry point of all
programs. Alternatively, the kernel may invoke exit internally for a process on receipt of
uncaught signals as discussed above. If so, the value of status is the signal number.
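As a brief illustration (my own sketch, not from the text), a program can terminate itself explicitly and hand a status value back to its parent:
```
#include <stdlib.h>

int main(void)
{
    /* ... do some work ... */
    exit(0);    /* status 0 is returned to the parent for examination */
}
```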
The system imposes no time limit on the execution of a process, and processes frequently exist
for a long time.
If the exiting process is a process group leader, the kernel also resets the process group number to 0 for all processes in that group, because another process may later be assigned the process ID of the process that just exited and become a process group leader itself. Processes that belonged to the old process group will not belong to the later process group.
The kernel then goes through the open file descriptors, closing each one internally with algorithm close, and releases the inodes it had accessed for the current directory and changed root (if it exists) via algorithm iput.
Among other duties, the clock interrupt handler must:
• Provide execution profiling capability for the kernel and for user processes
The clock handler is therefore kept fast, so that the critical time periods when other interrupts are blocked are as small as possible. Figure 7.5 shows the algorithm for handling clock interrupts.
Restarting the Clock: When the clock interrupts the system, most machines require that the
clock be reprimed by software instructions so that it will interrupt the processor again after a
suitable interval.
Internal System Timeouts: Some kernel operations, particularly device drivers and network
protocols, require invocation of kernel functions on a real-time basis.
For instance, a process may put a terminal into raw mode so that the kernel satisfies user read
requests at fixed intervals instead of waiting for the user to type a carriage return.
The user has no direct control over the entries in the callout table; various kernel algorithms
make entries as needed.
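As a rough illustration of the idea (the structure and field names below are my assumptions, not actual kernel declarations), a callout table entry records which kernel function to invoke, an argument for it, and the time until it fires, kept as a difference from the previous entry:
```
/* Hypothetical sketch of a callout entry; real kernels differ. */
struct callout {
    void (*c_func)(void *arg);  /* kernel function to invoke when the timer fires */
    void  *c_arg;               /* argument handed to c_func */
    int    c_time;              /* ticks until invocation, relative to the previous entry */
};
```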
There are several time-related system calls: stime, time, times, and alarm. The first two deal with global system time, and the latter two deal with time for individual processes.
Stime allows the superuser to set a global kernel variable to a value that gives the current time:
stime(pvalue);
where pvalue points to a long integer that gives the time as measured in seconds from midnight (00:00:00), January 1, 1970, GMT. The clock interrupt handler increments the kernel variable once a second. Time retrieves the time as set by stime:
time(tloc);
where tloc points to a location in the user process for the return value. Time also returns this value as the return value of the system call. Commands such as date use time to determine the current time.
Times retrieves the cumulative times that the calling process spent executing in user mode and
kernel mode and the cumulative times that all zombie children had executed in user mode and
kernel mode. The syntax for the call is
times(tbuffer)
struct tms *tbuffer;
where the structure tms contains the retrieved times and is defined by
struct tms {    /* time_t is the data structure for time */
    time_t tms_utime;    /* user time of process */
    time_t tms_stime;    /* kernel time of process */
    time_t tms_cutime;   /* user time of children */
    time_t tms_cstime;   /* kernel time of children */
};
Times returns the elapsed time "from an arbitrary point in the past," usually the time of system
boot.
User processes can schedule alarm signals using the alarm system call. The common factor in all the time-related system calls is their reliance on the system clock: the kernel manipulates various time counters when handling clock interrupts and initiates appropriate action.
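A small user-level sketch (my own example, not from the text) ties these calls together: it reads the calendar time with time, reads per-process CPU times with times, and schedules an alarm signal with alarm:
```
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <time.h>
#include <sys/times.h>

static void on_alarm(int sig)
{
    (void)sig;              /* nothing to do; pause() returns after the handler runs */
}

int main(void)
{
    time_t now = time((time_t *)0);     /* seconds since 00:00:00 January 1, 1970, GMT */
    struct tms t;
    clock_t ticks = times(&t);          /* elapsed ticks "from an arbitrary point in the past" */

    signal(SIGALRM, on_alarm);
    alarm(2);                           /* ask the kernel for SIGALRM in 2 seconds */
    pause();                            /* sleep until the alarm signal arrives */

    printf("calendar time %ld, boot-relative ticks %ld, user ticks %ld\n",
           (long)now, (long)ticks, (long)t.tms_utime);
    return 0;
}
```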
Q.4 What is the use of the fork() system call? Explain the sequence of operations the kernel executes for fork.
The only way for a user to create a new process in the UNIX operating system is to
invoke the fork system call. The process that invokes fork is called the parent process, and the
newly created process is called the child process. The syntax for the fork system call is
pid = fork();
On return from the fork system call, the two processes have identical copies of their user-level
context except for the return value pid. In the parent process, pid is the child process ID; in the
child process, pid is 0. Process 0, created internally by the kernel when the system is booted, is
the only process not created via fork. The kernel does the following sequence of operations for
fork.
1. It allocates a slot in the process table for the new process.
2. It assigns a unique ID number to the child process.
3. It makes a logical copy of the context of the parent process. Since certain portions of
a process, such as the text region, may be shared between processes, the kernel can
sometimes increment a region reference count instead of copying the region to a new
physical location in memory.
4. It increments file and inode table counters for files associated with the process.
5. It returns the ID number of the child to the parent process, and a 0 value to the child
process.
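A minimal user-level sketch (my own example, not from the text) shows how the two return values are typically used to tell parent and child apart:
```
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {                       /* child: fork returned 0 */
        printf("child %d running\n", (int)getpid());
        exit(0);
    } else if (pid > 0) {                 /* parent: fork returned the child ID */
        int status;
        wait(&status);
        printf("parent: child %d exited\n", (int)pid);
    } else {
        perror("fork");                   /* resources were unavailable */
    }
    return 0;
}
```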
The implementation of the fork system call is not trivial, because the child process appears to
start its execution sequence out of thin air. The algorithm for fork (figure 6.2) varies slightly for
demand paging and swapping systems; the ensuing discussion is based on traditional swapping
systems but will point out the places that change for demand paging systems.
It also assumes that the system has enough main memory available to store the child process.
```
/* Algorithm: fork
 * Input: none
 * Output: to parent, child PID; to child, 0 */
{
    increment counts on current directory inode and changed root (if applicable);
    make copy of parent context (u area, text, data, stack) in memory;
    push dummy system level context layer onto child system level context;
        /* dummy context lets child recognize itself and start running from here when scheduled */
    if (executing process is parent process)
    {
        change child state to "ready to run";
        return(child ID);    /* from system to user */
    }
    else    /* executing process is the child */
    {
        initialize u area timing fields;
        return(0);    /* to user */
    }
}
```
The kernel first ascertains that it has available resources to complete the fork successfully. On a
swapping system, it needs space either in memory or on disk to hold the child process; on a
paging system, it has to allocate memory for auxiliary tables such as page tables. If the resources
are unavailable, the fork call fails.
The kernel finds a slot in the process table to start constructing the context of the child process
and makes sure that the user doing the fork does not have too many processes already running. It
also picks a unique ID number for the new process, one greater than the most recently assigned
ID number. If another process already has the proposed ID number, the kernel attempts to assign
the next higher ID number.
When the ID numbers reach a maximum value, assignment starts from 0 again. Since most
processes execute for a short time, most ID numbers are not in use when ID assignment wraps
around.
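The following is only an illustrative sketch of that wrap-around search (the function and names are mine, not the actual kernel code); the kernel keeps proposing the next higher number until it finds one that no existing process holds:
```
#define MAXPID 30000                     /* assumed maximum before wrap-around */

static int lastpid;                      /* most recently assigned ID number */

/* pid_in_use is a hypothetical predicate: nonzero if some process table
 * entry already holds the given ID. */
int next_pid(int (*pid_in_use)(int))
{
    do {
        lastpid = lastpid + 1;
        if (lastpid >= MAXPID)
            lastpid = 0;                 /* assignment starts from 0 again */
    } while (pid_in_use(lastpid));       /* skip IDs still held by live processes */
    return lastpid;
}
```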
The system imposes a (configurable) limit on the number of processes a user can simultaneously
execute so that no user can steal many process table slots, thereby preventing other users from
creating new processes.
Similarly, ordinary users cannot create a process that would occupy the last remaining slot in the process table, or else the system could effectively deadlock: since the kernel cannot guarantee that existing processes will exit naturally, no new processes could ever be created, because all the process table slots would be in use.
Q.5 What is the use of signals? Explain the types of signals.
Signals inform processes of the occurrence of asynchronous events. Processes may send each other signals with the kill system call, or the kernel may send signals internally. There are 19 signals in the System V (Release 2) UNIX system that can be classified as follows:
• Signals having to do with the termination of a process, sent when a process exits or when a process invokes the signal system call with the death of child parameter;
• Signals having to do with unrecoverable conditions during a system call, such as running out of system resources during exec after the original address space has been released;
• Signals originating from a process in user mode, such as when a process wishes to receive an alarm signal after a period of time, or when processes send arbitrary signals to each other with the kill system call;
• Signals related to terminal interaction, such as when a user hangs up a terminal (or the "carrier" signal drops on such a line for any reason), or when a user presses the "break" or "delete" keys on a terminal keyboard.
The treatment of signals has several facets, namely how the kernel sends a signal to a process,
how the process handles a signal, and how a process controls its reaction to signals. To send a
signal to a process, the kernel sets a bit in the signal field of the process table entry,
corresponding to the type of signal received.
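As a small user-level sketch (my own example, not from the text), one process can declare how it reacts to a signal with the signal call, and another process (here, the same one) can post that signal with kill:
```
#include <signal.h>
#include <unistd.h>

static void on_interrupt(int sig)
{
    (void)sig;
    write(1, "caught SIGINT\n", 14);   /* async-signal-safe output from the handler */
}

int main(void)
{
    signal(SIGINT, on_interrupt);      /* tell the kernel how to handle SIGINT */
    kill(getpid(), SIGINT);            /* kernel sets the signal bit for this process */
    return 0;                          /* the handler runs before kill returns (signal sent to self) */
}
```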
Q.6 Explain the booting (bootstrap) procedure and the init process.
The administrator may set switches on the computer console to specify the address of a special
hardcoded bootstrap program or just push a single button that instructs the machine to load a
bootstrap program from its microcode. This program may consist of only a few instructions that
instruct the machine to execute another program.
On UNIX systems, the bootstrap procedure eventually reads the boot block (block 0) of a disk,
and loads it into memory. The program contained in the boot block loads the kernel from the file
system (from the file "/unix", for example, or another name specified by an administrator).
After the kernel is loaded in memory, the boot program transfers control to the start address of the kernel, and the kernel starts running (algorithm start, Figure 6.10).
The kernel initializes its internal data structures. For instance, it constructs the linked lists of free
buffers and inodes, constructs hash queues for buffers and inodes, initializes region structures,
page table entries, and so on. After completing the initialization phase, it mounts the root file
system onto root ("/") and fashions the environment for process 0, creating a u area, initializing
slot 0 in the process table and making root the current directory of process 0, among other things.
When the environment of process 0 is set up, the system is running as process 0. Process 0 forks,
invoking the fork algorithm directly from the kernel, because it is executing in kernel mode. The
new process, process 1, running in kernel mode, creates its user-level context by allocating a data
region and attaching it to its address space.
It grows the region to its proper size and copies code from the kernel address space to the new
region: This code now forms the user-level context of process 1. Process 1 then sets up the saved
user register context, "returns" from kernel to user mode, and executes the code it had just copied
from the kernel.
The init process is a process dispatcher, spawning processes that allow users to log in to the system, among others. Init reads the file "/etc/inittab" for instructions about which processes to spawn. The file "/etc/inittab" contains lines that contain an "id", a state identifier (single user, multi-user, etc.), an "action", and a program specification, as illustrated below. Init reads the file and, if the state in which it was invoked matches the state identifier of a line, creates a process that executes the given program specification.
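For illustration only (the device names and getty paths below are my assumptions, not taken from the text), entries take the colon-separated form id:state:action:program:
```
co::respawn:/etc/getty console console
46:2:respawn:/etc/getty tty46 4800H
```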
Q.7 Draw and explain user-level and kernel-level priorities.
Each process table entry contains a priority field. The priority is a function of recent CPU usage, where the priority is lower if a process has recently used the CPU. The range of priorities can be partitioned into two classes: user priorities and kernel priorities.
Each priority has a queue of processes logically associated with it. The processes with user-level
priorities were preempted on their return from the kernel to user mode, and processes with
kernel-level priorities achieved them in the *sleep* algorithm. User priorities are below a
threshold value and kernel priorities are above a threshold value. Processes with low kernel
priority wake up on receipt of a signal, but processes with high kernel priority continue to sleep.
The user level 0 is the highest user level priority and user level n is the lowest.
* The kernel assigns a priority to a process about to go to sleep. This priority depends solely on the reason for the sleep. Processes that sleep in lower-level algorithms tend to cause more system bottlenecks the longer they are inactive; hence they receive a higher priority than processes that would cause fewer system bottlenecks. For instance, a process sleeping and waiting for the completion of disk I/O has a higher priority than a process waiting for a free buffer, because the first process already has a buffer and it is possible that, after the completion of the I/O, it will release the buffer and other resources, resulting in more resource availability for the system.
* The kernel adjusts the priority of a process that returns from kernel mode to user mode.
The priority must be lowered to a user level priority. The kernel penalizes the executing process
in fairness to other processes, since it had just used valuable kernel resources.
* The clock handler adjusts the priorities of all processes in user mode at 1 second intervals
(on System V) and causes the kernel to go through the scheduling algorithm to prevent a process
from monopolizing use of the CPU.
When a process is running, every clock tick increments a field in the process table which records the recent CPU usage of the process. Once a second, the clock handler also adjusts the recent CPU usage of each process according to a decay function; on System V,
decay(CPU) = CPU / 2;
When it recomputes recent CPU usage, the clock handler also recalculates the priority of every process in the "preempted but ready-to-run" state according to the formula
priority = ("recent CPU usage" / 2) + (base level user priority);
where "base level user priority" is the threshold priority between kernel and user mode described above.
If a process is executing in a critical region of the kernel (i.e., its processor execution level has been raised), the kernel does not recompute its priority on the one-second clock tick; it recomputes the priorities at the next clock tick after the critical region is finished.
The scheduler on the UNIX system belongs to the general class of operating system schedulers known as *round robin with multilevel feedback*. That means that when the kernel schedules a process and its time quantum expires, it preempts the process and adds it to one of several priority queues.
```
/* Algorithm: schedule_process
 * Input: none
 * Output: none
 */
{
    while (no process picked to execute)
    {
        for (every process on run queue)
            pick highest priority process that is loaded in memory;
        if (no process eligible to execute)
            idle the machine;    /* interrupt takes machine out of idle state */
    }
    remove chosen process from run queue;
    switch context to that of chosen process, resume its execution;
}
```
This algorithm is executed at the conclusion of a context switch. It selects the highest priority process from those in the states "ready to run, loaded in memory" and "preempted". If several processes have the same priority, it schedules the one that has been "ready to run" the longest.
Example:
Processes A, B, and C are created and initially given priority 60, which is the highest user-level priority. Assuming that the processes make no system calls and that process A gets scheduled first, after one second the CPU count of A becomes 60. When it is recalculated, it decays to 30 (decay = 60 / 2), and A's priority becomes 75 (priority = 30 / 2 + 60). Then B gets scheduled, and the calculation continues every second.
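The following short program (my own sketch of the arithmetic above, not kernel code) replays the example: each second the running process accumulates 60 clock ticks, then every process's CPU count decays by half and its priority is recomputed:
```
#include <stdio.h>

#define BASE 60                              /* highest user-level priority */
#define NPROC 3

int main(void)
{
    int cpu[NPROC] = {0, 0, 0};              /* recent CPU usage of A, B, C */
    int prio[NPROC] = {BASE, BASE, BASE};
    const char *name[NPROC] = {"A", "B", "C"};

    for (int second = 1; second <= 3; second++) {
        cpu[second - 1] += 60;               /* A runs first, then B, then C */
        for (int i = 0; i < NPROC; i++) {
            cpu[i] /= 2;                     /* decay(CPU) = CPU / 2 */
            prio[i] = cpu[i] / 2 + BASE;     /* priority = CPU / 2 + base */
            printf("t=%ds %s: cpu=%d priority=%d\n", second, name[i], cpu[i], prio[i]);
        }
    }
    return 0;
}
```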
Q.9 Explain how the kernel prevents a process from monopolizing the use of the CPU in UNIX System V.
Kernel profiling gives a measure of how much time the system is executing in user mode versus
kernel mode, and how much time it spends executing individual routines in the kernel. The
kernel profile driver monitors the relative performance of kernel modules by sampling system
activity at the time of a clock interrupt.
The profile driver has a list of kernel addresses to sample, usually addresses of kernel functions; a process had previously downloaded these addresses by writing to the profile driver. If kernel profiling is enabled, the clock interrupt handler invokes the interrupt handler of the profile driver, which determines whether the processor mode at the time of the interrupt was user or kernel.
If the mode was user, the profiler increments a count for user execution, but if the mode was
kernel, it increments an internal counter corresponding to the program counter. User processes
can read the profile driver to obtain the kernel counts and do statistical measurements.
Users can profile the execution of processes at user level with the profil system call:
profil(buff, bufsize, offset, scale);
where buff is the address of an array in user space, bufsize is the size of the array, offset is the virtual address of a user subroutine (usually, the first), and scale is a factor that maps user virtual addresses into the array.
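A rough user-level sketch follows (my own example; the exact declaration of profil and the buffer scaling are system-dependent assumptions): it profiles a CPU-bound routine and prints the non-zero sample buckets afterward:
```
#include <stdio.h>
#include <unistd.h>                  /* profil() is declared here on many systems */

static unsigned short buf[4096];     /* buff: counts indexed by scaled program counter */
static volatile long sink;

static void work(void)
{
    for (long i = 0; i < 50000000L; i++)
        sink += i;                   /* burn CPU so clock ticks land in this routine */
}

int main(void)
{
    /* offset = address of the routine being profiled; scale 0xffff maps addresses densely */
    profil(buf, sizeof(buf), (size_t)work, 0xffff);
    work();
    profil(NULL, 0, 0, 0);           /* a zero scale turns profiling off */

    for (size_t i = 0; i < sizeof(buf) / sizeof(buf[0]); i++)
        if (buf[i])
            printf("bucket %zu: %u ticks\n", i, buf[i]);
    return 0;
}
```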
Keeping Time
The kernel increments a timer variable at every clock interrupt, keeping time in clock ticks from the time the system was booted. The kernel uses the timer variable to return a time value for the time system call, and to calculate the total (real-time) execution time of a process. The kernel saves the process start time in its u area when a process is created in the fork system call, and it subtracts that value from the current time when the process exits, giving the real execution time of the process. Another timer variable, set by the stime system call and updated once a second, keeps track of calendar time.