Operating System Module III
Operating System Services for Process Management & Scheduling: Introduction, Process
Creation, Termination & Other Issues, Threads, Multithreading, Types of Threads, Schedulers,
Types of Schedulers, Types of Scheduling, Scheduling Algorithms, Types of Scheduling
Algorithms.
Process Creation
Process Termination
• Process executes its last statement and then asks the operating system to delete it using the
exit() system call (illustrated in the sketch after this list).
▪ Returns status data from child to parent (via wait())
▪ Process's resources are deallocated by the operating system
• Parent may terminate the execution of child processes using the abort() system call.
Some reasons for doing so:
▪ The child has exceeded its allocated resources
▪ The task assigned to the child is no longer required
▪ The parent is exiting, and the operating system does not allow a child to continue if
its parent terminates (cascading termination)
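The following minimal C sketch ties process creation and termination together; it assumes a POSIX system and is not part of the original notes. A parent creates a child with fork(), the child runs to its last statement and calls exit(), and the parent collects the status with wait() before the child's resources are finally released.

/* Illustrative only: parent/child sketch using fork(), exit() and waitpid(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {
        perror("fork");
        return 1;
    } else if (pid == 0) {
        /* Child: run to its last statement, then ask the OS to delete it. */
        printf("child %d finishing\n", getpid());
        exit(42);                    /* status data passed back to the parent */
    } else {
        /* Parent: wait() collects the child's status; until then the child's
         * PCB has not yet been deallocated by the operating system. */
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}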
What is a Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps
track of which instruction to execute next, its own system registers which hold its current working
variables, and its own stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, the data segment and
open files. When one thread alters a data segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism. They represent a software approach to improving operating
system performance by reducing the scheduling overhead; in its flow of control, a thread is
equivalent to a classical process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread
represents a separate flow of control. Threads have been successfully used in implementing
network servers and web servers. They also provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors. The following figure shows the working of a
single-threaded and a multithreaded process.
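As an illustration of how peer threads share a process's data segment while keeping private stacks, the sketch below uses POSIX threads (an assumption; the notes do not name a thread library) to let two peer threads update one shared counter.

/* Illustrative pthreads sketch: two peer threads share the process's data
 * segment (the global counter) while each keeps its own stack and registers. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *name = arg;                   /* private to this thread's stack */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* updates are visible to peers */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("%s done\n", name);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread-1");
    pthread_create(&t2, NULL, worker, "thread-2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* 200000: both saw the same data */
    return 0;
}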
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context-switch threads than processes.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Kernel Level Threads
The Kernel maintains context information for the process as a whole and for individual threads
within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs
thread creation, scheduling and management in Kernel space.
Advantages
• Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
• If one thread in a process is blocked, the Kernel can schedule another thread of the same
process.
Disadvantages
• Kernel threads are generally slower to create and manage than the user threads.
• Transfer of control from one thread to another within the same process requires a mode
switch to the Kernel.
Many-to-Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads.
The following diagram shows the many-to-many threading model, where 6 user-level threads are
multiplexed onto 6 kernel-level threads. In this model, developers can create as many user
threads as necessary and the corresponding kernel threads can run in parallel on a multiprocessor
machine. This model provides the best level of concurrency, and when a thread performs a
blocking system call, the kernel can schedule another thread for execution.
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread
management is done in user space by the thread library. When a thread makes a blocking system
call, the entire process is blocked. Only one thread can access the kernel at a time, so
multiple threads are unable to run in parallel on multiprocessors.
If user-level thread libraries are implemented on an operating system whose kernel does not
support threads, the system follows the many-to-one model.
One-to-One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread. This
model provides more concurrency than the many-to-one model. It also allows another thread to
run when a thread makes a blocking system call, and it allows multiple threads to execute in
parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding
kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one relationship model.
Process Scheduling
• Process scheduling is selecting one process for execution out of all the ready processes.
• When a computer is multiprogrammed, it has multiple processes competing for the CPU at
the same time. If only one CPU is available, then a choice has to be made regarding which
process to execute next. This decision making process is known as scheduling and the part
of the OS that makes this choice is called a scheduler. The algorithm it uses in making this
choice is called the scheduling algorithm.
• Scheduling queues – the operating system maintains the following queues of processes:
➢ Job queue – consists of all processes in the system; new processes enter the
system through this queue.
➢ Ready queue – consists of all processes residing in main memory,
ready and waiting to execute.
➢ Device queues – consist of processes waiting for an I/O device.
Each device has its own device queue.
➢ Processes migrate among the various queues (see the sketch after this list).
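A minimal sketch of how such queues are often represented: each queue is a linked list of process control blocks (PCBs), and a process migrates simply by being unlinked from one list and appended to another. The PCB fields and helper names below are hypothetical, chosen only for illustration.

/* Hypothetical sketch: scheduling queues as linked lists of PCBs. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } state_t;

typedef struct pcb {
    int pid;
    state_t state;
    struct pcb *next;          /* link used by whichever queue holds the PCB */
} pcb_t;

typedef struct {               /* a queue is just head/tail pointers */
    pcb_t *head, *tail;
} queue_t;

static void enqueue(queue_t *q, pcb_t *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static pcb_t *dequeue(queue_t *q) {
    pcb_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
        p->next = NULL;
    }
    return p;
}

int main(void) {
    queue_t ready = {0}, disk_queue = {0};
    pcb_t p1 = {.pid = 1, .state = READY};
    pcb_t p2 = {.pid = 2, .state = READY};

    enqueue(&ready, &p1);
    enqueue(&ready, &p2);

    /* Process 1 issues an I/O request: it migrates to the device queue. */
    pcb_t *p = dequeue(&ready);
    p->state = WAITING;
    enqueue(&disk_queue, p);

    printf("pid %d now waits in the disk queue\n", p->pid);
    return 0;
}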
Medium-term scheduler
• The medium-term scheduler performs an intermediate level of scheduling.
• It can remove a process from memory, store it on disk, and later bring it back from disk to
continue execution: this is called swapping.
• The process is swapped out and swapped in later by the medium-term scheduler.
• Swapping may be used to improve the process mix or to free up memory when it has been
over-committed.
Context Switching
• Context switching is the mechanism by which the CPU is switched from one process or
task to another.
• In this phenomenon, the execution of the process that is in the running state is
suspended by the kernel and another process that is in the ready state is executed
by the CPU.
• When a switch is performed, the system stores the status of the old running process, in the
form of its registers, in its PCB and assigns the CPU to the new process to execute its tasks.
• Context switching is required, for example, in the following situations:
➢ When a process of higher priority enters the ready state: the execution of the
running process should be stopped and the higher-priority process should be given
the CPU for execution.
➢ When an interrupt occurs: the process in the running state should be stopped and
the CPU should handle the interrupt before doing anything else.
➢ When a transition between user mode and kernel mode is required.
The process of context switching involves a number of steps. The following diagram depicts the
process of context switching between the two processes P1 and P2.
In the figure, you can see that initially the process P1 is running on the CPU to execute its
task, while another process, P2, is in the ready state. If an error or interrupt occurs, or the
process requires input/output, the P1 process switches from the running state to the waiting
state. Before changing the state of process P1, context switching saves the context of P1, in
the form of its registers and program counter, into PCB1. After that, it loads the saved state
of process P2 from PCB2 and moves P2 into the running state.
Later, process P2 is switched out of the CPU in the same way so that process P1 can resume
execution: the context of P1 is reloaded from PCB1 and P1 resumes its task at exactly the
point where it was suspended. Without this saved context the information would be lost, and
when the process ran again it would have to start execution from the beginning.
The time taken to switch the CPU from one process to another is called the Context Switching
Time.
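The sketch below is purely illustrative (a real context switch is performed by the kernel in architecture-specific assembly, and these PCB fields are invented for the example): it mimics the steps described above by saving the running process's register context into its PCB and then loading the next process's saved context.

/* Illustrative only: a toy "context switch" that saves the running process's
 * register context into its PCB and restores the next process's context. */
#include <stdio.h>

typedef struct {
    unsigned long pc;          /* program counter */
    unsigned long sp;          /* stack pointer   */
    unsigned long regs[8];     /* general-purpose registers (simplified) */
} context_t;

typedef struct {
    int pid;
    context_t ctx;             /* saved context lives in the PCB */
} pcb_t;

static context_t cpu;          /* pretend these are the real CPU registers */

static void context_switch(pcb_t *old_p, pcb_t *next_p) {
    old_p->ctx = cpu;          /* save the running process's state into its PCB */
    cpu = next_p->ctx;         /* load the next process's state from its PCB    */
}

int main(void) {
    pcb_t p1 = {.pid = 1, .ctx = {.pc = 0x1000, .sp = 0x7000}};
    pcb_t p2 = {.pid = 2, .ctx = {.pc = 0x2000, .sp = 0x8000}};

    cpu = p1.ctx;                              /* P1 is running */
    context_switch(&p1, &p2);                  /* P1 -> P2 */
    printf("CPU now at pc=%#lx (P2)\n", cpu.pc);
    context_switch(&p2, &p1);                  /* P2 -> P1 resumes where it was saved */
    printf("CPU now at pc=%#lx (P1)\n", cpu.pc);
    return 0;
}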
Basic Concepts
Dispatcher
• Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
➢ switching context
➢ switching to user mode
➢ jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and start another
running
Scheduling Criteria
• Arrival Time (AT) – the time at which the process arrives in the system.
• Burst Time (BT) – the amount of CPU time the process requires.
• Completion Time (CT) – the time at which the process finishes execution.
• Response time – the amount of time from arrival until the process gets the CPU for the
first time.
• Scheduling Length (L) – max(CT) − min(AT), i.e., the time from the first arrival to the last
completion.
• Throughput – the number of processes that complete their execution per unit time, i.e.,
n / L for n processes (computed in the sketch after this list).
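A small sketch applying the formulas above to a hypothetical set of finished processes; the arrival, first-CPU and completion times are invented sample data, not values from these notes.

/* Sketch: computing the scheduling criteria above for n finished processes. */
#include <stdio.h>

typedef struct {
    double arrival;      /* AT: when the process entered the system */
    double first_cpu;    /* when it first got the CPU               */
    double completion;   /* CT: when it finished                    */
} proc_t;

int main(void) {
    proc_t p[] = {                      /* hypothetical sample data */
        {0.0, 1.0, 7.0},
        {2.0, 3.0, 10.0},
        {4.0, 8.0, 12.0},
    };
    int n = sizeof p / sizeof p[0];

    double min_at = p[0].arrival, max_ct = p[0].completion;
    for (int i = 0; i < n; i++) {
        if (p[i].arrival < min_at)    min_at = p[i].arrival;
        if (p[i].completion > max_ct) max_ct = p[i].completion;
        printf("P%d response time = %.1f\n", i + 1,
               p[i].first_cpu - p[i].arrival);   /* first CPU time - arrival */
    }

    double length = max_ct - min_at;             /* max(CT) - min(AT) */
    printf("scheduling length = %.1f\n", length);
    printf("throughput = %.2f processes per time unit\n", n / length);
    return 0;
}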
Multilevel Queue Scheduling
• Multilevel queues are not an independent scheduling algorithm. They make use of
other existing algorithms to group and schedule jobs with common characteristics.
➢ Multiple queues are maintained for processes with common characteristics.
➢ Each queue can have its own scheduling algorithms.
➢ Priorities are assigned to each queue.
• For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.
• A process can move between the various queues; aging can be implemented this way
• Multilevel-feedback-queue scheduler defined by the following parameters:
➢ number of queues
➢ scheduling algorithms for each queue
➢ method used to determine when to upgrade a process
➢ method used to determine when to demote a process
➢ method used to determine which queue a process will enter when that process
needs service
• Three queues:
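The notes break off at "Three queues:". A configuration commonly used to illustrate the multilevel feedback queue, and assumed in the sketch below rather than taken from these notes, is two round-robin queues with increasing time quanta followed by an FCFS queue; a process that uses up its quantum is demoted to the next queue.

/* Hypothetical sketch of a multilevel feedback queue with three levels:
 * Q0 = RR (quantum 8), Q1 = RR (quantum 16), Q2 = FCFS (runs to completion).
 * A process that exhausts its quantum is demoted to the next lower queue. */
#include <stdio.h>

#define NPROC  3
#define LEVELS 3

typedef struct {
    int id;
    int remaining;                   /* CPU burst still needed */
} proc_t;

int main(void) {
    int quantum[LEVELS] = {8, 16, 0};          /* 0 means FCFS: no preemption */
    proc_t queue[LEVELS][NPROC];               /* fixed-size queues for brevity */
    int count[LEVELS] = {0};

    /* All new processes enter the top queue (Q0). */
    proc_t jobs[NPROC] = {{1, 5}, {2, 30}, {3, 12}};
    for (int i = 0; i < NPROC; i++)
        queue[0][count[0]++] = jobs[i];

    int t = 0;
    for (int lvl = 0; lvl < LEVELS; lvl++) {
        for (int i = 0; i < count[lvl]; i++) {
            proc_t p = queue[lvl][i];
            int slice = quantum[lvl] ? quantum[lvl] : p.remaining;
            int run = p.remaining < slice ? p.remaining : slice;

            t += run;
            p.remaining -= run;
            if (p.remaining == 0) {
                printf("P%d finishes at time %d (queue %d)\n", p.id, t, lvl);
            } else {
                /* Quantum expired: demote to the next lower-priority queue. */
                queue[lvl + 1][count[lvl + 1]++] = p;
                printf("P%d demoted to queue %d at time %d\n", p.id, lvl + 1, t);
            }
        }
    }
    return 0;
}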