Lesson 3: Process Management: IT 311: Applied Operating System
WHAT IS A PROCESS?
The most fundamental concept in a modern OS is the
process. The principal function of the OS is to create,
manage, and terminate processes. While processes are
active, the OS must see that each is allocated time for
execution by the processor, coordinate their activities,
manage conflicting demands, and allocate system resources
to processes.
Process States
For a program to be executed, a process, or task, is created
for that program.
We can characterize the behavior of an individual process by
listing the sequence of instructions that execute for that
process. Such a listing is referred to as a trace of the
process. We can characterize the behavior of the processor by
showing how the traces of the various processes are
interleaved.
A small dispatcher program switches the processor from
one process to another.
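To make the idea of interleaved traces concrete, here is a minimal Python sketch (not an OS implementation): each process is modeled as a list of instruction labels, and a tiny dispatcher switches the "processor" between processes after a fixed quantum. The names dispatch, quantum, and the instruction labels are invented for this illustration.

def dispatch(processes, quantum=2):
    """Round-robin over the per-process traces, printing the interleaved
    trace as seen from the processor's point of view."""
    counters = {name: 0 for name in processes}   # next instruction per process
    ready = list(processes)                      # simple ready queue
    while ready:
        name = ready.pop(0)
        trace = processes[name]
        for _ in range(quantum):                 # run at most one quantum
            if counters[name] >= len(trace):
                break
            print(f"{name}: {trace[counters[name]]}")
            counters[name] += 1
        if counters[name] < len(trace):          # not finished: back of the queue
            ready.append(name)

dispatch({
    "P1": ["i1", "i2", "i3", "i4"],
    "P2": ["j1", "j2", "j3"],
})

Running the sketch prints two instructions of P1, then two of P2, and so on, which is exactly the interleaving of traces described above.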
Process Control
Modes of Execution
To understand how the OS manages processes, we need to distinguish between the
mode of processor execution normally associated with the OS and
that normally associated with user programs.
Most processors support at least two modes of execution. Certain
instructions can be executed only in the more-privileged mode; these
include reading or altering a control register, such as the
PSW, primitive I/O instructions, and instructions that relate to
memory management.
The less-privileged mode is often referred to as the user mode,
because user programs typically would execute in this mode.
The more-privileged mode is referred to as the system mode,
control mode, or kernel mode.
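The following is an illustrative sketch only, assuming a simplified processor with a single mode bit: privileged operations such as writing the PSW are refused in user mode, the way real hardware traps a privileged instruction. The class and method names (Cpu, write_psw, PrivilegeError) are invented for this example, not a real API.

class PrivilegeError(Exception):
    pass

class Cpu:
    KERNEL, USER = 0, 1

    def __init__(self):
        self.mode = Cpu.USER
        self.psw = 0

    def write_psw(self, value):
        # Altering a control register such as the PSW is privileged.
        if self.mode != Cpu.KERNEL:
            raise PrivilegeError("privileged instruction in user mode")
        self.psw = value

cpu = Cpu()
try:
    cpu.write_psw(0xFF)        # rejected: still in user mode
except PrivilegeError as e:
    print("trap:", e)

cpu.mode = Cpu.KERNEL          # e.g., after a system-call trap into the OS
cpu.write_psw(0xFF)            # now permitted
print("PSW =", cpu.psw)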
TYPES OF THREADS
There are two broad categories of thread implementation: user-level
threads (ULTs) and kernel-level threads (KLTs).
User-Level Threads – In a pure ULT facility, all of the work of thread
management is done by the application, and the kernel is not aware of
the existence of threads.
Figure (a) illustrates the pure ULT and KLT approach. Any application can be programmed to be multithreaded by using a threads library, which is a package of routines for ULT management.
The threads library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
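Here is a minimal sketch of the idea behind a user-level threads library: creation, scheduling, and context switching all happen inside the application, within a single thread visible to the kernel. Python generators stand in for saved and restored thread contexts; the names UserThreadLibrary and worker are invented for illustration and are not a real library.

from collections import deque

class UserThreadLibrary:
    def __init__(self):
        self.ready = deque()                 # ready queue managed in user space

    def create(self, func, *args):
        self.ready.append(func(*args))       # a generator holds the thread's context

    def run(self):
        while self.ready:
            thread = self.ready.popleft()
            try:
                next(thread)                 # resume: restore context, run to next yield
                self.ready.append(thread)    # yield point: save context, reschedule
            except StopIteration:
                pass                         # thread finished; drop it

def worker(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield                                # voluntarily give up the processor

lib = UserThreadLibrary()
lib.create(worker, "T1", 3)
lib.create(worker, "T2", 2)
lib.run()

The kernel sees only one schedulable entity here; the alternation of T1 and T2 is produced entirely by the user-level scheduler, which is the defining property of ULTs.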
PRINCIPLES OF DEADLOCK
Deadlock can be defined as the permanent blocking of a set
of processes that either compete for system resources or
communicate with each other. A set of processes is
deadlocked when each process in the set is blocked awaiting
an event (typically the freeing up of some requested
resource) that can only be triggered by another blocked
process in the set.
Deadlock is permanent because none of the events is ever
triggered. Unlike other problems in concurrent process
management, there is no efficient solution in the general
case.
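The small demonstration below, built only for this handout, shows the situation described above: two threads each hold one lock and wait for the lock held by the other. The lock names and the timeout are invented for the example; the timeout exists only so the program terminates and reports the blockage instead of hanging forever, as a real deadlocked pair of processes would.

import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def proc(name, first, second):
    with first:
        time.sleep(0.1)                       # ensure both threads hold one lock
        if second.acquire(timeout=1):         # truly deadlocked code would block here
            second.release()
            print(f"{name}: finished")
        else:
            print(f"{name}: blocked waiting for a resource held by the other process")

t1 = threading.Thread(target=proc, args=("P1", lock_a, lock_b))
t2 = threading.Thread(target=proc, args=("P2", lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()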
DEADLOCK PREVENTION
The strategy of deadlock prevention is, simply put, to design a
system in such a way that the possibility of deadlock is excluded.
Prevention techniques relate to each of the four conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.
Mutual exclusion can sometimes be relaxed by allowing multiple accesses for reads but only
exclusive access for writes. Even in this case, deadlock can occur if more
than one process requires write permission.
The hold-and-wait condition can be prevented by requiring that a process
request all of its required resources at one time and blocking the process
until all requests can be granted simultaneously.
The no-preemption condition can be prevented by allowing the OS to preempt resources;
this approach is practical only when applied to
resources whose state can be easily saved and restored later, as is the
case with a processor.
The circular wait condition can be prevented by defining a linear ordering
of resource types.
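A minimal sketch of the circular-wait technique, assuming resources are represented as locks indexed by type: every process acquires the resources it needs in ascending index order, so no cycle of waits can form. The helper names (acquire_in_order, release) are assumptions for this example, not an OS API.

import threading

locks = [threading.Lock() for _ in range(3)]   # resource types R0 < R1 < R2

def acquire_in_order(indices):
    """Acquire the requested resources in ascending order of type."""
    acquired = []
    for i in sorted(indices):                  # enforce the linear ordering
        locks[i].acquire()
        acquired.append(i)
    return acquired

def release(indices):
    for i in reversed(indices):
        locks[i].release()

held = acquire_in_order([2, 0])                # request arrives as {R2, R0}
print("holding:", held)                        # acquired as R0 first, then R2
release(held)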
DEADLOCK AVOIDANCE
An approach to solving the deadlock problem that differs
subtly from deadlock prevention is deadlock avoidance.
The term avoidance is a bit confusing. In fact, one could consider the strategies
discussed in this section to be examples of deadlock prevention because they
indeed prevent the occurrence of a deadlock.
Two approaches to deadlock avoidance:
Do not start a process if its demands might lead to deadlock.
Do not grant an incremental resource request to a process if this
allocation might lead to deadlock.
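The sketch below illustrates the second approach in the spirit of the banker's algorithm: before granting an incremental request, pretend to grant it and check that the resulting state is still safe, that is, that some order exists in which every process can run to completion. The matrix names follow the usual convention (claim is maximum demand, alloc is current allocation), and the numeric data is invented for the example.

def is_safe(available, claim, alloc):
    avail = available[:]
    finished = [False] * len(alloc)
    while True:
        progress = False
        for p in range(len(alloc)):
            need = [c - a for c, a in zip(claim[p], alloc[p])]
            if not finished[p] and all(n <= v for n, v in zip(need, avail)):
                # Process p could run to completion and return its allocation.
                avail = [v + a for v, a in zip(avail, alloc[p])]
                finished[p] = True
                progress = True
        if not progress:
            return all(finished)

def grant(request, p, available, claim, alloc):
    """Grant the request only if the resulting state remains safe.
    (A full version would also check the request against p's need
    and against availability.)"""
    trial_avail = [v - r for v, r in zip(available, request)]
    trial_alloc = [row[:] for row in alloc]
    trial_alloc[p] = [a + r for a, r in zip(alloc[p], request)]
    return is_safe(trial_avail, claim, trial_alloc)

available = [3, 3]                  # currently available units of R0, R1
claim = [[6, 3], [3, 2]]            # maximum demand of P0 and P1
alloc = [[1, 1], [2, 0]]            # current allocation to P0 and P1

print(grant([1, 0], 1, available, claim, alloc))   # True: the state stays safe
print(grant([3, 0], 0, available, claim, alloc))   # False: the request is deferred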
DEADLOCK DETECTION
Deadlock detection strategies do not limit resource access or restrict
process actions. With deadlock detection, requested resources are
granted to processes whenever possible. Periodically, the OS performs
an algorithm that allows it to detect the circular wait condition.
Deadlock Detection Algorithm
A check for deadlock can be made as frequently as each resource
request, or less frequently, depending on how likely it is for a deadlock to
occur.
Checking at each resource request has two advantages: It leads to early
detection, and the algorithm is relatively simple because it is based on
incremental changes to the state of the system. On the other hand, such
frequent checks consume considerable processor time.
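To make the detection idea concrete, here is a sketch of a marking-style check, assuming the usual allocation and request matrices (the data below is illustrative, not from the lesson): every process whose outstanding request can be satisfied from currently available resources is marked, on the assumption that it would then finish and return what it holds; any process left unmarked is deadlocked.

def find_deadlocked(available, alloc, request):
    avail = available[:]
    # Processes that hold no resources cannot be part of a deadlock.
    marked = [all(a == 0 for a in row) for row in alloc]
    changed = True
    while changed:
        changed = False
        for p in range(len(alloc)):
            if not marked[p] and all(r <= v for r, v in zip(request[p], avail)):
                # p's request can be granted; assume it finishes and
                # returns its allocation.
                avail = [v + a for v, a in zip(avail, alloc[p])]
                marked[p] = True
                changed = True
    return [p for p in range(len(alloc)) if not marked[p]]

available = [0, 0]
alloc     = [[1, 0], [0, 1]]     # P0 holds R0, P1 holds R1
request   = [[0, 1], [1, 0]]     # each waits for the resource the other holds
print("deadlocked processes:", find_deadlocked(available, alloc, request))

With the sample data, neither request can be met, so both processes remain unmarked and are reported as deadlocked, which is the circular wait the algorithm is designed to detect.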