Introduction to Operating Systems - 2
PROF. K. ADISESHA
Process management
➢ Process Concept
➢ Threads
➢ Process Scheduling
➢ CPU Scheduling Algorithm
➢ Inter process Communication
Process management
Introduction
Prof. K. Adisesha (Ph. D)
Introduction to Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
➢ When a program is loaded into the main memory it becomes a process.
➢ It can be divided into four sections ─
❖ Stack: The process Stack contains the temporary data such as
method/function parameters, return address and local variables
❖ Heap: This is dynamically allocated memory to a process during its run
time
❖ Text: This includes the current activity represented by the value of
Program Counter and the contents of the processor's registers.
❖ Data: This section contains the global and static variables.
Introduction
Process Life Cycle:
When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
➢ A process can have one of the following five states at a time.
❖ Start
❖ Ready
❖ Running
❖ Waiting
❖ Terminated or Exit
Introduction
Process Life Cycle:
➢ Start: This is the initial state when a process is first started/created.
➢ Ready: The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the OS so that they can run.
➢ Running: Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
➢ Waiting: Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
➢ Terminated or Exit: Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from
main memory.
Process Control Block
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for
every process.
➢ The PCB is identified by an integer process ID (PID).
➢ A PCB keeps all the information needed to keep track of
a process.
➢ The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.
Process Control Block
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for
every process.
➢ Information associated with each process
❖ Process state
❖ Program counter
❖ CPU registers
❖ CPU scheduling information
❖ Memory-management information
❖ Accounting information
❖ I/O status information
Process Control Block
CPU Switch From Process to Process:
Process Communication
Context Switch:
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch.
➢ Context of a process represented in the PCB
➢ Context-switch time is overhead; the system does no useful work while switching
❖ The more complex the OS and the PCB -> longer the context switch
➢ Time dependent on hardware support
❖ Some hardware provides multiple sets of registers per CPU -> multiple contexts
loaded at once
Process Communication
Process Creation:
A parent process creates child processes, which, in turn, create other processes,
forming a tree of processes.
➢ Generally, process identified and managed via a process identifier (pid)
➢ Resource sharing
❖ Parent and children share all resources
❖ Children share subset of parent’s resources
❖ Parent and child share no resources
➢ Execution
❖ Parent and children execute concurrently
❖ Parent waits until children terminate
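The second execution option above (parent waits until the child terminates) can be sketched with the POSIX fork/wait calls. A minimal Python illustration (POSIX-only; the helper name spawn_child is ours, not from the slides):

```python
import os

def spawn_child():
    # fork() creates a child process, a near-copy of the parent.
    pid = os.fork()
    if pid == 0:
        # Child branch: do some work, then exit with a status code.
        os._exit(7)
    # Parent branch: wait until the child terminates.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)  # child's exit code, delivered via wait
```

The returned exit code is the "output data from child to parent (via wait)" mentioned under process termination.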
Process Communication
Process Termination:
Process executes last statement and asks the operating system to delete it (exit).
➢ Output data from child to parent (via wait)
➢ Process’ resources are deallocated by operating system
➢ Parent may terminate execution of children processes (abort)
❖ Child has exceeded allocated resources
❖ Task assigned to child is no longer required
❖ If parent is exiting
▪ Some operating systems do not allow child to continue if its parent terminates
▪ All children terminated - cascading termination
Process Communication
Inter process Communication (IPC):
Processes within a system may be independent or cooperating. A cooperating process can
affect or be affected by other processes, including by sharing data.
➢ Reasons for cooperating processes:
❖ Information sharing
❖ Computation speedup
❖ Modularity
❖ Convenience
➢ Cooperating processes need interprocess communication (IPC)
➢ Two models of IPC
❖ Shared memory
❖ Message passing
Process Communication
Inter process Communication (IPC):
Inter process communication is a mechanism which allows processes to communicate
with each other and synchronize their actions.
➢ The communication between these processes can be seen as a method of co-operation
between them.
➢ Some of the methods to provide IPC:
❖ Message Queue.
❖ Shared Memory.
❖ Signal.
❖ Shared Files and Pipe
❖ Socket
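As a minimal sketch of the pipe mechanism from the list above (shown within one process for brevity; between a parent and child the same descriptors would be shared across a fork):

```python
import os

def pipe_demo(message: bytes) -> bytes:
    # A pipe is a unidirectional byte channel:
    # written at one end, read at the other.
    read_end, write_end = os.pipe()
    os.write(write_end, message)
    os.close(write_end)          # closing the write end signals end-of-data
    data = os.read(read_end, 1024)
    os.close(read_end)
    return data
```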
Threading
Thread:
A thread is a single sequential flow of execution within a process; it is also known
as a thread of execution or a thread of control.
➢ Each thread of the same process has its own program counter and its own stack of
activation records and control blocks.
➢ Thread is often referred to as a lightweight process.
➢ Need of Thread:−
❖ It takes far less time to create a new thread in an existing process than to create a new
process.
❖ Threads can share the common data, they do not need to use Inter- Process
communication.
❖ Context switching is faster when working with threads.
❖ It takes less time to terminate a thread than a process.
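A small sketch of the data-sharing point above: threads of one process read and write the same variables directly, with no IPC mechanism in between.

```python
import threading

results = []   # shared by all threads of this process -- no IPC needed

def square(x):
    results.append(x * x)

threads = [threading.Thread(target=square, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```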
Threading
Thread:
Threads are implemented in following two ways :
➢ User Level Threads − User managed threads.
❖ The operating system does not recognize the user-level thread.
❖ User threads can be easily implemented and it is implemented by the user.
❖ If a user performs a user-level thread blocking operation, the whole process is
blocked.
➢ Kernel Level Threads − Threads recognized and managed by the operating system.
❖ The kernel maintains a thread control block for each thread, in addition to the
process control block for each process.
❖ The kernel-level thread is implemented by the operating system.
❖ The kernel knows about all the threads and manages them.
Threading
Multithreading Models:
Some operating systems provide a combined user-level thread and kernel-level thread
facility.
➢ In a combined system, multiple threads within the same application can run in parallel on
multiple processors and a blocking system call need not block the entire process.
➢ The idea is to achieve parallelism by dividing a process into multiple threads.
➢ For example, in a browser, multiple tabs can be different threads.
➢ There are three multithreading models:
❖ Many to many relationship
❖ Many to one relationship
❖ One to one relationship
Threading
Thread:
Difference between User-Level & Kernel-Level Thread.
User-Level Threads vs Kernel-Level Threads:
➢ User-level threads are faster to create and manage; kernel-level threads are slower
to create and manage.
➢ User-level threads are implemented by a thread library at the user level; kernel
threads are created with operating system support.
➢ A user-level thread is generic and can run on any operating system; a kernel-level
thread is specific to the operating system.
➢ Applications using only user-level threads cannot take advantage of
multiprocessing; kernel routines themselves can be multithreaded.
Threading
Advantages of Thread:
The various advantages of using Thread are:
➢ Threads minimize the context switching time.
➢ Use of threads provides concurrency within a process.
➢ Efficient communication.
➢ It is more economical to create and context switch threads.
➢ Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Swapping
Swapping:
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to secondary storage and make that memory available to other processes.
➢ Swapping is also known as a technique for memory compaction.
➢ The total time taken by swapping process includes
the time it takes to move the entire process to a
secondary disk and then to copy the process back to
memory
Process Scheduling
Process Scheduling:
Maximize CPU use, quickly switch processes onto CPU for time sharing.
➢ Process scheduler selects among available processes for next execution on CPU
➢ Maintains scheduling queues of processes
❖ Job queue – set of all processes in the system
❖ Ready queue – set of all processes residing in main memory, ready and waiting to
execute
❖ Device queues – set of processes waiting for an I/O device
❖ Processes migrate among the various queues
Process Scheduling
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems, used
to select among available processes for next execution on the CPU.
➢ The OS maintains all PCBs in Process Scheduling Queues.
❖ Job queue: This queue keeps all the processes in the
system.
❖ Ready queue: This queue keeps a set of all processes
residing in main memory, ready and waiting to execute.
❖ Device queues: The processes which are blocked due to
unavailability of an I/O device constitute this queue
Process Scheduling
Representation of Process Scheduling:
Processes can be described as either:
➢ I/O-bound process – spends more time
doing I/O than computations, many short
CPU bursts
➢ CPU-bound process – spends more time
doing computations; few very long CPU
bursts
Process Scheduling
Schedulers:
Schedulers are special system software which handle process scheduling in various ways.
➢ The main task is to select the jobs to be submitted into the system and to decide which process to run.
➢ Schedulers are of three types:
❖ Long-Term Scheduler:
▪ It is also called a job scheduler.
▪ A long-term scheduler determines which programs are admitted to the system for processing.
❖ Short-Term Scheduler:
▪ It is also called the CPU scheduler.
▪ Its main objective is to increase system performance in accordance with the chosen set of criteria.
❖ Medium-Term Scheduler:
▪ Medium-term scheduling is a part of swapping.
▪ It removes the processes from the memory. It reduces the degree of multiprogramming.
Process Scheduling
Context Switch:
When CPU switches to another process, the system must save the state of the old process
and load the saved state for the new process via a context switch.
➢ Context of a process represented in the PCB
➢ Context-switch time is overhead; the system does no useful work while switching
❖ The more complex the OS and the PCB -> longer the context switch
➢ Time dependent on hardware support
❖ Some hardware provides multiple sets of registers per CPU -> multiple contexts
loaded at once
Process Scheduling
Scheduling Criteria:
The scheduler selects from the processes in memory that are ready to execute, and
allocates the CPU based on certain scheduling criteria.
➢ Scheduling Criteria are based on:
❖ CPU utilization – keep the CPU as busy as possible
❖ Throughput – No. of processes that complete their execution per time unit
❖ Turnaround time – amount of time to execute a particular process
❖ Waiting time – amount of time a process has been waiting in the ready queue
❖ Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
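Given arrival, burst, and completion times, turnaround and waiting times follow directly from the definitions above. A small helper sketch (the function name metrics is ours):

```python
def metrics(arrival, burst, completion):
    # turnaround = completion - arrival; waiting = turnaround - burst
    turnaround = [c - a for a, c in zip(arrival, completion)]
    waiting = [t - b for t, b in zip(turnaround, burst)]
    return turnaround, waiting
```

For example, three processes arriving at 0, 1, 2 with bursts 5, 3, 8 that complete at 5, 8, 16 have turnaround times 5, 7, 14 and waiting times 0, 4, 6.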
Process Scheduling
Scheduling algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.
➢ These algorithms are either non-preemptive or preemptive
➢ There are popular process scheduling algorithms:
❖ First-Come, First-Served (FCFS) Scheduling
❖ Shortest-Job-Next (SJN) Scheduling
❖ Priority Scheduling
❖ Round Robin(RR) Scheduling
❖ Multiple-Level Queues Scheduling.
Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, Jobs are executed on first come, first
serve basis.
➢ It is a non-preemptive scheduling algorithm.
➢ Easy to understand and implement.
➢ Its implementation is based on FIFO queue.
➢ Poor in performance, as average wait time is high.
Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, Jobs are executed on first come, first
serve basis.
Example: FCFS Scheduling
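As a worked example (the slide's chart is not reproduced here), a minimal FCFS simulation that returns per-process waiting times:

```python
def fcfs(arrival, burst):
    # Run jobs strictly in arrival order (FIFO queue), non-preemptively.
    order = sorted(range(len(arrival)), key=lambda i: arrival[i])
    time = 0
    waiting = [0] * len(arrival)
    for i in order:
        time = max(time, arrival[i])   # CPU may sit idle until the job arrives
        waiting[i] = time - arrival[i]
        time += burst[i]
    return waiting
```

Processes arriving at 0, 1, 2 with bursts 5, 3, 8 wait 0, 4, and 6 units; a long early job delays everyone behind it, which is why the average wait is high.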
Scheduling Algorithms
Shortest Job Next (SJN):
This is also known as shortest job first: associate with each process the length of its
next CPU burst, and use these lengths to schedule the process with the shortest time.
➢ It is commonly implemented as a non-preemptive algorithm; a preemptive variant,
shortest-remaining-time-first, also exists.
➢ Best approach to minimize waiting time.
➢ Easy to implement in Batch systems where required CPU time is known in advance.
➢ Impossible to implement in interactive systems where required CPU time is not
known.
➢ The processor should know in advance how much time the process will take.
Scheduling Algorithms
Shortest Job Next (SJN):
This is also known as shortest job first: associate with each process the length of its
next CPU burst, and use these lengths to schedule the process with the shortest time.
➢ Example:
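As a worked example (the slide's chart is not reproduced here), a non-preemptive SJN simulation:

```python
def sjn(arrival, burst):
    # Of the jobs that have already arrived, always run the shortest next.
    n = len(arrival)
    done = [False] * n
    time = 0
    waiting = [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        if not ready:   # CPU idle: jump ahead to the next arrival
            time = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        i = min(ready, key=lambda j: burst[j])
        waiting[i] = time - arrival[i]
        time += burst[i]
        done[i] = True
    return waiting
```

For arrivals 0, 1, 2, 3 and bursts 8, 4, 9, 5, the run order is P0, P1, P3, P2 with waiting times 0, 7, 15, 9.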
Scheduling Algorithms
Priority Scheduling:
Priority scheduling is a priority-based algorithm and one of the most common
scheduling algorithms in batch systems.
➢ Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
➢ Processes with same priority are executed on first come first served basis.
➢ Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Scheduling Algorithms
Priority Based Scheduling:
Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
➢ Example:
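As a worked example (the slide's chart is not reproduced here), a non-preemptive priority scheduler with ties broken first-come, first-served. Here a lower number means higher priority, which is one common convention:

```python
def priority_np(arrival, burst, prio):
    # Of the arrived jobs, run the one with the best (lowest) priority number.
    n = len(arrival)
    done = [False] * n
    time = 0
    waiting = [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        if not ready:   # CPU idle: jump ahead to the next arrival
            time = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        i = min(ready, key=lambda j: (prio[j], arrival[j]))  # FCFS tie-break
        waiting[i] = time - arrival[i]
        time += burst[i]
        done[i] = True
    return waiting
```

Three jobs arriving together with bursts 4, 3, 1 and priorities 2, 1, 3 run in the order P1, P0, P2, waiting 3, 0, and 7 units respectively.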
Scheduling Algorithms
Round Robin Scheduling:
Each process gets a small unit of CPU time (time quantum), after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Round Robin is the preemptive process scheduling algorithm.
➢ Each process is provided a fixed time to execute, called a quantum.
➢ Once a process is executed for a given time period, it is preempted and other process
executes for a given time period.
➢ Context switching is used to save states of preempted processes.
Scheduling Algorithms
Round Robin Scheduling:
Each process gets a small unit of CPU time (time quantum), after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢ Example:
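As a worked example (the slide's chart is not reproduced here), a Round Robin simulation for jobs that all arrive at time 0; preempted jobs return to the tail of the ready queue:

```python
from collections import deque

def round_robin(burst, quantum):
    remaining = list(burst)
    ready = deque(range(len(burst)))    # ready queue of process indices
    time = 0
    completion = [0] * len(burst)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for at most one quantum
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: back of the queue
        else:
            completion[i] = time
    return completion
```

Bursts 5, 3, 1 with quantum 2 complete at times 9, 8, and 5: the short job finishes early instead of waiting behind the long one, which is the point of time slicing.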
Scheduling Algorithms
Multiple-Level Queues Scheduling:
Multiple-level queues are not an independent scheduling algorithm.
➢ They make use of other existing algorithms to group and schedule jobs with common
characteristics.
❖ Multiple queues are maintained for processes with common characteristics.
❖ Each queue can have its own scheduling algorithms.
❖ Priorities are assigned to each queue.
Process Synchronization
Process Synchronization:
Process Synchronization means sharing system resources among processes in such a way
that concurrent access to shared data is handled consistently, minimizing the chance of
inconsistent data.
➢ Process Synchronization ensures a perfect co-ordination among the process.
➢ Maintaining data consistency demands mechanisms to ensure synchronized execution
of cooperating processes.
➢ Process Synchronization can be provided by using several different tools like:
❖ Semaphores
❖ Mutual Exclusion or Mutex
❖ Monitor
Process Synchronization
Process Synchronization:
Process Synchronization means sharing system resources among processes in such a way
that concurrent access to shared data is handled consistently, minimizing the chance of
inconsistent data.
➢ The synchronization problem arises with cooperative processes because resources are
shared among them.
➢ On the basis of synchronization, processes are categorized as one of the following two types:
❖ Independent Process: Execution of one process does not affect the execution of
other processes.
❖ Cooperative Process: Execution of one process affects the execution of other
processes.
Process Synchronization
Process Synchronization:
Race Condition:
➢ When several processes access and manipulate the same data at the same time, they may
enter into a race condition.
➢ Race condition occurs among process that share common storage for read and write.
➢ Race condition occurs due to improper synchronization of shared memory access.
Critical section problem:
➢ Critical section is a code segment that can be accessed by only one process at a time.
➢ Critical section contains shared variables which need to be synchronized to maintain consistency of data
variables.
➢ Any solution to the critical section problem must satisfy three requirements:
❖ Mutual Exclusion
❖ Progress
❖ Bounded Waiting.
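A minimal sketch of mutual exclusion: the read-modify-write of a shared counter is the critical section, and a lock guarantees only one thread executes it at a time (without the lock, concurrent updates could be lost to a race):

```python
import threading

counter = 0
lock = threading.Lock()   # enforces mutual exclusion

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: read, modify, write 'counter'
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```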
Process Synchronization
Semaphores:
A semaphore is a signaling mechanism and a thread that is waiting on a semaphore
can be signaled by another thread.
➢ A semaphore uses two atomic operations, wait and signal for process synchronization.
➢ Classical problems of Synchronization with Semaphore Solution:
❖ Bounded-buffer (or Producer-Consumer) Problem
❖ Dining- Philosophers Problem
❖ Readers and Writers Problem
❖ Sleeping Barber Problem
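The wait/signal semantics can be sketched with Python's threading.Semaphore, where acquire plays the role of wait and release the role of signal:

```python
import threading

sem = threading.Semaphore(0)   # value 0, so the first wait blocks
log = []

def waiter():
    sem.acquire()              # wait: blocks until someone signals
    log.append("woken")

t = threading.Thread(target=waiter)
t.start()
log.append("signaling")
sem.release()                  # signal: wakes the waiting thread
t.join()
```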
Process Synchronization
Bounded-buffer (or Producer-Consumer) Problem:
Bounded Buffer problem is also called producer consumer problem. This problem is
generalized in terms of the Producer-Consumer problem.
➢ Solution to this problem is, creating two counting semaphores “full” and “empty” to
keep track of the current number of full and empty buffers respectively.
➢ Producers produce a product and consumers consume the product, but both use one of
the containers each time.
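A sketch of that solution: "empty" counts free slots, "full" counts filled slots, and a mutex protects the buffer itself.

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # protects the buffer
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty): block if buffer is full
        with mutex:
            buffer.append(item)
        full.release()                  # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                  # wait(full): block if buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal(empty)

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
```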
Process Synchronization
Dining- Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers.
➢ There is one chopstick between each philosopher.
➢ A philosopher may eat if he can pick up the two chopsticks
adjacent to him.
➢ A chopstick may be picked up by either of its adjacent
philosophers, but not by both at once.
➢ This problem involves the allocation of limited resources to a
group of processes in a deadlock-free and starvation-free
manner.
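One classic deadlock-free solution imposes a global order on the chopsticks: every philosopher picks up the lower-numbered chopstick first, which breaks the circular wait. A sketch:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    # Always acquire the lower-numbered chopstick first: no circular wait,
    # hence no deadlock, and every philosopher eventually eats.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1   # eating

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```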
Process Synchronization
Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes. We
distinguish between these two types of processes by referring to the former as readers
and to the latter as writers.
➢ In operating systems, this situation is called the readers-writers problem.
➢ Problem parameters:
❖ One set of data is shared among a number of processes.
❖ Once a writer is ready, it performs its write. Only one writer may write at a time.
❖ If a process is writing, no other process can read it.
❖ If at least one reader is reading, no other process can write.
❖ Readers only read; they may not write.
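A sketch of the first readers-writers solution: a counter tracks active readers, the first reader locks writers out, and the last reader lets them back in.

```python
import threading

read_count = 0
rc_lock = threading.Lock()     # protects read_count
resource = threading.Lock()    # held by a writer, or by the reader group
data = []
seen = []

def writer(value):
    with resource:             # exclusive: no readers, no other writers
        data.append(value)

def reader():
    global read_count
    with rc_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()     # first reader locks out writers
    seen.append(list(data))        # many readers may read concurrently
    with rc_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()     # last reader lets writers back in

writer(1)                          # one write completes first
readers = [threading.Thread(target=reader) for _ in range(2)]
for t in readers:
    t.start()
for t in readers:
    t.join()
```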
Process Synchronization
Sleeping Barber Problem:
The Sleeping Barber Problem describes a barber shop with a single barber who sleeps
when there are no customers and serves arriving customers one at a time.
➢ Barber shop with one barber, one barber chair and N
chairs to wait in.
➢ When no customers the barber goes to sleep in barber
chair and must be woken when a customer comes in.
➢ When barber is cutting hair new customers take empty
seats to wait, or leave if no vacancy.
Deadlocks
Deadlock:
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
➢ In operating systems when there are two or more processes
that hold some resources and wait for resources held by
other(s).
➢ Example:
❖ Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting
for resource 1.
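The example above is exactly a cycle in the wait-for graph (process 1 → process 2 → process 1). A small sketch of cycle detection over such a graph (the dictionary encoding is our own illustration):

```python
def has_deadlock(wait_for):
    # wait_for maps a process to the processes it is waiting on.
    # A cycle in this graph means deadlock.
    def dfs(node, visiting, finished):
        if node in visiting:
            return True          # back edge: cycle found
        if node in finished:
            return False
        visiting.add(node)
        for nxt in wait_for.get(node, []):
            if dfs(nxt, visiting, finished):
                return True
        visiting.remove(node)
        finished.add(node)
        return False

    finished = set()
    return any(dfs(p, set(), finished) for p in list(wait_for))
```

The slide's example, where each process waits on the other, is detected as deadlocked; a simple one-way wait is not.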
Deadlocks
Deadlock:
Methods for handling deadlock .
➢ There are three ways to handle deadlock:
❖ Deadlock prevention or avoidance: The idea is to never let the system enter a
deadlock state; the avoidance strategy requires advance information about the
maximum resources each process may request.
❖ Deadlock detection and recovery: Let deadlock occur, detect it, and then recover,
for example by preempting resources or aborting processes.
❖ Ignore the problem altogether: If deadlock is very rare, then let it happen and
reboot the system. This is the approach that both Windows and UNIX take.
Discussion
Queries ?
Prof. K. Adisesha
9449081542

Operating System-2 by Adi.pdf

  • 1.
    Introduction to Operating Systems -2 PROF. K. ADISESHA Process management
  • 2.
    Process management Process Concept Threads ProcessScheduling CPU Scheduling Algorithm Inter process Communication 2 Process management
  • 3.
    Introduction Prof. K. Adisesha(Ph. D) 3 Introduction to Process: A process is basically a program in execution. The execution of a process must progress in a sequential fashion. ➢ When a program is loaded into the main memory it becomes a process. ➢ It can be divided into four sections ─ ❖ Stack: The process Stack contains the temporary data such as method/function parameters, return address and local variables ❖ Heap: This is dynamically allocated memory to a process during its run time ❖ Text: This includes the current activity represented by the value of Program Counter and the contents of the processor's registers. ❖ Data: This section contains the global and static variables.
  • 4.
    Introduction Prof. K. Adisesha(Ph. D) 4 Process Life Cycle: When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized. ➢ A process can have one of the following five states at a time. ❖ Start ❖ Ready ❖ Running ❖ Waiting ❖ Terminated or Exit
  • 5.
    Introduction Prof. K. Adisesha(Ph. D) 5 Process Life Cycle: ➢ Start: This is the initial state when a process is first started/created. ➢ Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the OS so that they can run. ➢ Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions. ➢ Waiting: Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available. ➢ Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.
  • 6.
    Process Control Block Prof.K. Adisesha (Ph. D) 6 Process Control Block (PCB): A Process Control Block is a data structure maintained by the Operating System for every process. ➢ The PCB is identified by an integer process ID (PID). ➢ A PCB keeps all the information needed to keep track of a process. ➢ The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
  • 7.
    Process Control Block Prof.K. Adisesha (Ph. D) 7 Process Control Block (PCB): A Process Control Block is a data structure maintained by the Operating System for every process. ➢ Information associated with each process ❖ Process state ❖ Program counter ❖ CPU registers ❖ CPU scheduling information ❖ Memory-management information ❖ Accounting information ❖ I/O status information
  • 8.
    Process Control Block Prof.K. Adisesha (Ph. D) 8 CPU Switch From Process to Process:
  • 9.
    Process Communication Prof. K.Adisesha (Ph. D) 9 Context Switch: When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process via a context switch. ➢ Context of a process represented in the PCB ➢ Context-switch time is overhead; the system does no useful work while switching ❖ The more complex the OS and the PCB -> longer the context switch ➢ Time dependent on hardware support ❖ Some hardware provides multiple sets of registers per CPU -> multiple contexts loaded at once
  • 10.
    Process Communication Prof. K.Adisesha (Ph. D) 10 Process Creation: Parent process create children processes, which, in turn create other processes, forming a tree of processes. ➢ Generally, process identified and managed via a process identifier (pid) ➢ Resource sharing ❖ Parent and children share all resources ❖ Children share subset of parent’s resources ❖ Parent and child share no resources ➢ Execution ❖ Parent and children execute concurrently ❖ Parent waits until children terminate
  • 11.
    Process Communication Prof. K.Adisesha (Ph. D) 11 Process Termination: Process executes last statement and asks the operating system to delete it (exit). ➢ Output data from child to parent (via wait) ➢ Process’ resources are deallocated by operating system ➢ Parent may terminate execution of children processes (abort) ❖ Child has exceeded allocated resources ❖ Task assigned to child is no longer required ❖ If parent is exiting ▪ Some operating systems do not allow child to continue if its parent terminates ▪ All children terminated - cascading termination
  • 12.
    Process Communication Prof. K.Adisesha (Ph. D) 12 Inter process Communication (IPC): Processes within a system may be independent or cooperating, Cooperating process can affect or be affected by other processes, including sharing data. ➢ Reasons for cooperating processes: ❖ Information sharing ❖ Computation speedup ❖ Modularity ❖ Convenience ➢ Cooperating processes need interprocess communication (IPC) ➢ Two models of IPC ❖ Shared memory ❖ Message passing
  • 13.
    Process Communication Prof. K.Adisesha (Ph. D) 13 Inter process Communication (IPC): Inter process communication is a mechanism which allows processes to communicate with each other and synchronize their actions. ➢ The communication between these processes can be seen as a method of co-operation between them. ➢ Some of the methods to provide IPC: ❖ Message Queue. ❖ Shared Memory. ❖ Signal. ❖ Shared Files and Pipe ❖ Socket
  • 14.
    Threading Prof. K. Adisesha(Ph. D) 14 Thread: A thread is a single sequential flow of execution of tasks of a process so it is also known as thread of execution or thread of control. ➢ Each thread of the same process makes use of a separate program counter and a stack of activation records and control blocks. ➢ Thread is often referred to as a lightweight process. ➢ Need of Thread:− ❖ It takes far less time to create a new thread in an existing process than to create a new process. ❖ Threads can share the common data, they do not need to use Inter- Process communication. ❖ Context switching is faster when working with threads. ❖ It takes less time to terminate a thread than a process.
  • 15.
    Threading Prof. K. Adisesha(Ph. D) 15 Thread: Threads are implemented in following two ways : ➢ User Level Threads − User managed threads. ❖ The operating system does not recognize the user-level thread. ❖ User threads can be easily implemented and it is implemented by the user. ❖ If a user performs a user-level thread blocking operation, the whole process is blocked. ➢ Kernel Level Threads − The kernel thread recognizes the operating system. ❖ There is a thread control block and process control block in the system for each thread and process in the kernel-level thread. ❖ The kernel-level thread is implemented by the operating system. ❖ The kernel knows about all the threads and manages them.
  • 16.
    Threading Prof. K. Adisesha(Ph. D) 16 Multithreading Models: Some operating system provide a combined user level thread and Kernel level thread facility. ➢ In a combined system, multiple threads within the same application can run in parallel on multiple processors and a blocking system call need not block the entire process. ➢ The idea is to achieve parallelism by dividing a process into multiple threads. ➢ For example, in a browser, multiple tabs can be different threads. ➢ Multithreading models are three types ❖ Many to many relationship ❖ Many to one relationship ❖ One to one relationship
  • 17.
    Threading Prof. K. Adisesha(Ph. D) 17 Thread: Difference between User-Level & Kernel-Level Thread. User-Level Threads Kernel-Level Thread User-level threads are faster to create and manage. Kernel-level threads are slower to create and manage. Implementation is by a thread library at the user level. Operating system supports creation of Kernel threads. User-level thread is generic and can run on any operating system. Kernel-level thread is specific to the operating system. Multi-threaded applications cannot take advantage of multiprocessing. Kernel routines themselves can be multithreaded.
  • 18.
    Threading Prof. K. Adisesha(Ph. D) 18 Advantages of Thread: The various advantages of using Thread are: ➢ Threads minimize the context switching time. ➢ Use of threads provides concurrency within a process. ➢ Efficient communication. ➢ It is more economical to create and context switch threads. ➢ Threads allow utilization of multiprocessor architectures to a greater scale and efficiency..
  • 19.
    Swapping Prof. K. Adisesha(Ph. D) 19 Swapping: Swapping is a mechanism in which a process can be swapped temporarily out of main memory to secondary storage and make that memory available to other processes. ➢ Swapping is also known as a technique for memory compaction. ➢ The total time taken by swapping process includes the time it takes to move the entire process to a secondary disk and then to copy the process back to memory
  • 20.
    Process Scheduling Prof. K.Adisesha (Ph. D) 20 Process Scheduling: Maximize CPU use, quickly switch processes onto CPU for time sharing. ➢ Process scheduler selects among available processes for next execution on CPU ➢ Maintains scheduling queues of processes ❖ Job queue – set of all processes in the system ❖ Ready queue – set of all processes residing in main memory, ready and waiting to execute ❖ Device queues – set of processes waiting for an I/O device ❖ Processes migrate among the various queues
  • 21.
Process Scheduling:
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU, and the selection of another process, on the basis of a particular strategy.
➢ Process scheduling is an essential part of multiprogramming operating systems; it selects among available processes for next execution on the CPU.
➢ The OS maintains all PCBs in process scheduling queues:
❖ Job queue: keeps all the processes in the system.
❖ Ready queue: keeps the set of all processes residing in main memory, ready and waiting to execute.
❖ Device queues: the processes blocked due to unavailability of an I/O device constitute these queues.
Representation of Process Scheduling:
Processes can be described as either:
➢ I/O-bound process – spends more time doing I/O than computation; many short CPU bursts.
➢ CPU-bound process – spends more time doing computation; few very long CPU bursts.
Schedulers:
Schedulers are special system software that handle process scheduling in various ways.
➢ Their main task is to select the jobs to be submitted into the system and to decide which process to run.
➢ Schedulers are of three types:
❖ Long-Term Scheduler:
▪ Also called the job scheduler.
▪ It determines which programs are admitted to the system for processing.
❖ Short-Term Scheduler:
▪ Also called the CPU scheduler.
▪ Its main objective is to increase system performance in accordance with the chosen set of criteria.
❖ Medium-Term Scheduler:
▪ Medium-term scheduling is a part of swapping.
▪ It removes processes from memory, reducing the degree of multiprogramming.
Context Switch:
When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process via a context switch.
➢ The context of a process is represented in its PCB.
➢ Context-switch time is pure overhead; the system does no useful work while switching.
❖ The more complex the OS and the PCB, the longer the context switch.
➢ Context-switch time also depends on hardware support.
❖ Some hardware provides multiple sets of registers per CPU, so multiple contexts can be loaded at once.
Scheduling Criteria:
The scheduler selects a process in memory that is ready to execute and allocates the CPU to it based on certain scheduling criteria.
➢ Scheduling criteria include:
❖ CPU utilization – keep the CPU as busy as possible.
❖ Throughput – the number of processes that complete their execution per time unit.
❖ Turnaround time – the amount of time to execute a particular process, from submission to completion.
❖ Waiting time – the amount of time a process has been waiting in the ready queue.
❖ Response time – the amount of time from when a request was submitted until the first response is produced, not the final output (relevant in time-sharing environments).
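The turnaround and waiting-time criteria can be made concrete with a small calculation. The following is an illustrative sketch only; the process names and burst times are invented for the example. With all processes arriving at time 0 and running back to back, turnaround time equals completion time and waiting time is turnaround minus burst:

```python
# Hypothetical workload: all processes arrive at time 0, run back to back.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # CPU burst of each process

completion, clock = {}, 0
for pid, burst in bursts.items():
    clock += burst
    completion[pid] = clock             # time at which the process finishes

# With arrival time 0: turnaround = completion, waiting = turnaround - burst.
turnaround = dict(completion)
waiting = {pid: turnaround[pid] - bursts[pid] for pid in bursts}
avg_wait = sum(waiting.values()) / len(waiting)

print(waiting)    # {'P1': 0, 'P2': 24, 'P3': 27}
print(avg_wait)   # 17.0
```

Note how one long burst ahead of two short ones drives the average waiting time up; this is exactly the effect the FCFS discussion below calls "poor in performance".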
Scheduling Algorithms:
A process scheduler assigns different processes to the CPU based on a particular scheduling algorithm.
➢ These algorithms are either non-preemptive or preemptive.
➢ Popular process scheduling algorithms include:
❖ First-Come, First-Served (FCFS) Scheduling
❖ Shortest-Job-Next (SJN) Scheduling
❖ Priority Scheduling
❖ Round Robin (RR) Scheduling
❖ Multiple-Level Queues Scheduling
Scheduling Algorithms
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, jobs are executed on a first-come, first-served basis.
➢ It is a non-preemptive scheduling algorithm.
➢ Easy to understand and implement.
➢ Its implementation is based on a FIFO queue.
➢ Poor in performance, as the average waiting time is high.
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, jobs are executed on a first-come, first-served basis.
Example: FCFS Scheduling
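The slide's worked example is a chart that does not survive in text form. As a minimal substitute, here is a sketch of FCFS in Python; the `fcfs` helper and the job list are illustrative, not from the slides. Since jobs run strictly in arrival order, each process waits exactly as long as the CPU is busy with earlier arrivals:

```python
def fcfs(processes):
    """processes: list of (pid, arrival, burst). Returns each waiting time."""
    clock, waiting = 0, {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        clock = max(clock, arrival)   # CPU may sit idle until the job arrives
        waiting[pid] = clock - arrival
        clock += burst                # run the whole burst: non-preemptive
    return waiting

jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
print(fcfs(jobs))   # {'P1': 0, 'P2': 4, 'P3': 6}
```

The FIFO queue the slide mentions appears here as the arrival-sorted iteration order.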
Shortest Job Next (SJN):
Also known as Shortest Job First (SJF), this algorithm associates with each process the length of its next CPU burst and uses these lengths to schedule the process with the shortest next burst.
➢ It is a non-preemptive scheduling algorithm (a preemptive variant, Shortest Remaining Time First, also exists).
➢ Best approach to minimize average waiting time.
➢ Easy to implement in batch systems, where the required CPU time is known in advance.
➢ Impossible to implement in interactive systems, where the required CPU time is not known.
➢ The scheduler must know in advance how much time each process will take.
Shortest Job Next (SJN):
Also known as Shortest Job First, this algorithm associates with each process the length of its next CPU burst and schedules the process with the shortest one.
➢ Example:
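Here is a hedged sketch of non-preemptive SJN (the `sjf` function and job data are invented for illustration). Arrived-but-waiting jobs sit in a min-heap keyed on burst length, so the shortest available burst is always dispatched next:

```python
import heapq

def sjf(processes):
    """Non-preemptive SJF. processes: (pid, arrival, burst).
    Returns the order in which processes complete."""
    pending = sorted(processes, key=lambda p: p[1])   # by arrival time
    ready, clock, order, i = [], 0, [], 0
    while i < len(pending) or ready:
        # Move every job that has arrived by now into the ready heap.
        while i < len(pending) and pending[i][1] <= clock:
            heapq.heappush(ready, (pending[i][2], pending[i][0]))
            i += 1
        if not ready:                 # CPU idle: jump to the next arrival
            clock = pending[i][1]
            continue
        burst, pid = heapq.heappop(ready)   # shortest burst wins
        clock += burst                      # runs to completion
        order.append(pid)
    return order

jobs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]
print(sjf(jobs))   # ['P1', 'P3', 'P2', 'P4']
```

P1 runs first only because nothing shorter has arrived yet, which is exactly why SJN needs burst lengths known in advance.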
Priority Scheduling:
Priority scheduling is a priority-based algorithm and one of the most common scheduling algorithms in batch systems.
➢ Each process is assigned a priority. The process with the highest priority is executed first, and so on.
➢ Processes with the same priority are executed on a first-come, first-served basis.
➢ Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Priority Based Scheduling:
In its non-preemptive form, priority scheduling is one of the most common scheduling algorithms in batch systems.
➢ Example:
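A minimal sketch of non-preemptive priority scheduling, assuming (as is common but not stated on the slide) that a lower number means higher priority and that all processes arrive at time 0; the function name and job data are invented:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling; lower number = higher priority.
    processes: list of (pid, burst, priority), all arriving at time 0.
    Returns (run order, waiting time per process)."""
    order = sorted(processes, key=lambda p: p[2])   # highest priority first
    clock, waiting = 0, {}
    for pid, burst, _ in order:
        waiting[pid] = clock    # waited while higher-priority jobs ran
        clock += burst
    return [p[0] for p in order], waiting

jobs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
run, wait = priority_schedule(jobs)
print(run)    # ['P2', 'P5', 'P1', 'P3', 'P4']
print(wait)   # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
```

Python's `sorted` is stable, so ties in priority fall back to input (arrival) order, matching the slide's FCFS tie-breaking rule.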
Round Robin Scheduling:
Each process gets a small unit of CPU time (a time quantum); after this time has elapsed, the process is preempted and added to the end of the ready queue.
➢ Round Robin is a preemptive process scheduling algorithm.
➢ Each process is given a fixed time to execute, called a quantum.
➢ Once a process has executed for the given time period, it is preempted and another process executes for its time period.
➢ Context switching is used to save the states of preempted processes.
Round Robin Scheduling:
Each process gets a small unit of CPU time (time quantum); after this time has elapsed, the process is preempted and added to the end of the ready queue.
➢ Example:
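A sketch of Round Robin using a FIFO ready queue (the `round_robin` helper and the burst values are illustrative). A process that still has work left after its quantum goes to the back of the queue, exactly as the slide describes:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (pid, burst), all arriving at time 0.
    Returns the completion time of each process."""
    queue = deque(processes)        # the ready queue
    clock, done = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)     # one quantum, or less if finishing
        clock += run
        remaining -= run
        if remaining:
            queue.append((pid, remaining))   # preempted: back of the queue
        else:
            done[pid] = clock
    return done

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# {'P2': 7, 'P3': 10, 'P1': 30}
```

With quantum 4, the short jobs P2 and P3 finish early instead of waiting behind P1's long burst, which is the responsiveness RR buys at the cost of extra context switches.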
Multiple-Level Queues Scheduling:
Multiple-level queues are not an independent scheduling algorithm.
➢ They make use of other existing algorithms to group and schedule jobs with common characteristics:
❖ Multiple queues are maintained for processes with common characteristics.
❖ Each queue can have its own scheduling algorithm.
❖ Priorities are assigned to each queue.
Process Synchronization
Process Synchronization:
Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is handled properly, thereby minimizing the chance of inconsistent data.
➢ Process synchronization ensures proper coordination among processes.
➢ Maintaining data consistency demands mechanisms to ensure synchronized execution of cooperating processes.
➢ Process synchronization can be provided by using several different tools, such as:
❖ Semaphores
❖ Mutual Exclusion (Mutex)
❖ Monitors
Process Synchronization:
➢ The process synchronization problem arises in the case of cooperative processes, because resources are shared among them.
➢ On the basis of synchronization, processes are categorized as one of the following two types:
❖ Independent Process: execution of one process does not affect the execution of other processes.
❖ Cooperative Process: execution of one process affects the execution of other processes.
Race Condition:
➢ When several processes access and manipulate the same data at the same time, they may enter a race condition.
➢ Race conditions occur among processes that share common storage for reads and writes.
➢ Race conditions occur due to improper synchronization of shared memory access.
Critical Section Problem:
➢ A critical section is a code segment that can be accessed by only one process at a time.
➢ The critical section contains shared variables which need to be synchronized to maintain the consistency of data.
➢ Any solution to the critical section problem must satisfy three requirements:
❖ Mutual Exclusion
❖ Progress
❖ Bounded Waiting
Semaphores:
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread.
➢ A semaphore uses two atomic operations, wait and signal, for process synchronization.
➢ Classical synchronization problems with semaphore solutions:
❖ Bounded-Buffer (Producer-Consumer) Problem
❖ Dining-Philosophers Problem
❖ Readers and Writers Problem
❖ Sleeping Barber Problem
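The wait/signal pair can be sketched with Python's `threading.Semaphore`, where `acquire()` plays the role of wait and `release()` the role of signal (the thread function and messages below are invented for the demo). Starting the semaphore at count 0 forces the waiter to block until it is signaled:

```python
import threading

sem = threading.Semaphore(0)   # count 0: nothing to consume yet
events = []

def waiter():
    sem.acquire()              # wait(): blocks until another thread signals
    events.append("waiter ran")

t = threading.Thread(target=waiter)
t.start()
events.append("main signals")
sem.release()                  # signal(): increments the count, wakes the waiter
t.join()
print(events)   # ['main signals', 'waiter ran']
```

The ordering is guaranteed: the waiter cannot pass `acquire()` before the main thread has appended its message and called `release()`.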
Bounded-Buffer (Producer-Consumer) Problem:
The bounded-buffer problem is also called the producer-consumer problem; it is the generalized statement of the producer-consumer situation.
➢ The solution is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffer slots respectively.
➢ Producers produce items and consumers consume them, but both use one slot of the shared buffer at a time.
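The "full"/"empty" counting-semaphore solution described above can be sketched as follows; the buffer capacity, item count, and helper names are invented for the example, and a mutex is added (as the standard solution does) to protect the buffer itself:

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)   # counts free slots in the buffer
full = threading.Semaphore(0)           # counts filled slots
mutex = threading.Lock()                # protects the buffer structure
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty): need a free slot
        with mutex:
            buffer.append(item)
        full.release()                  # signal(full): one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()                  # wait(full): need a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                 # signal(empty): one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)   # items come out in FIFO order: [0, 1, 2, ..., 9]
```

The producer blocks whenever all three slots are full, and the consumer blocks whenever they are all empty, so neither overruns the bounded buffer.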
Dining-Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers.
➢ There is one chopstick between each pair of philosophers.
➢ A philosopher may eat only if he can pick up the two chopsticks adjacent to him.
➢ A chopstick may be picked up by either of its adjacent philosophers, but not both at once.
➢ This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
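The slides do not prescribe a particular solution; one standard deadlock-free strategy is resource ordering, sketched below with one lock per chopstick (the philosopher count and round count are chosen for the demo). Picking up the lower-numbered chopstick first breaks the circular wait that causes deadlock:

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]   # one lock per chopstick
meals = [0] * N

def philosopher(i, rounds=10):
    left, right = i, (i + 1) % N
    # Resource ordering: always acquire the lower-numbered chopstick first,
    # which breaks the circular-wait condition for deadlock.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1           # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [10, 10, 10, 10, 10] — every philosopher eats; no deadlock
```

If every philosopher instead grabbed the left chopstick first, all five could each hold one chopstick and wait forever for the other, which is exactly the deadlock scenario discussed later in the deck.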
Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes. Some processes may want only to read the database, whereas others may want to update it; we refer to the former as readers and to the latter as writers.
➢ In OS terms, this situation is called the readers-writers problem.
➢ Problem parameters:
❖ One set of data is shared among a number of processes.
❖ Once a writer is ready, it performs its write. Only one writer may write at a time.
❖ If a process is writing, no other process can read the data.
❖ If at least one reader is reading, no other process can write.
❖ Readers only read; they may not write.
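The rules above can be sketched with the classic first readers-writers solution: a reader count guarded by one lock, and a write lock held either by a single writer or collectively by the group of readers. All names and the value written are invented for the example; the writer runs first so the printed result is deterministic:

```python
import threading

shared = {"value": 0}
count_lock = threading.Lock()    # protects reader_count
write_lock = threading.Lock()    # held by one writer, or by the reader group
reader_count = 0
reads = []

def writer(v):
    with write_lock:             # only one writer, and no readers, at a time
        shared["value"] = v

def reader():
    global reader_count
    with count_lock:
        reader_count += 1
        if reader_count == 1:
            write_lock.acquire()   # first reader locks writers out
    reads.append(shared["value"])  # many readers may be in this section at once
    with count_lock:
        reader_count -= 1
        if reader_count == 0:
            write_lock.release()   # last reader lets writers back in

w = threading.Thread(target=writer, args=(42,))
w.start(); w.join()
readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers:
    t.start()
for t in readers:
    t.join()
print(reads)   # [42, 42, 42]
```

Note that only the first reader acquires `write_lock` and only the last releases it, so any number of readers overlap while writers stay excluded.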
Sleeping Barber Problem:
The Sleeping Barber Problem models a barber shop with one barber, one barber chair, and N chairs to wait in.
➢ When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in.
➢ While the barber is cutting hair, new customers take the empty waiting chairs, or leave if there is no vacancy.
Deadlocks
Deadlock:
A deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process.
➢ In operating systems, deadlock arises when two or more processes hold some resources and wait for resources held by the other(s).
➢ Example:
❖ Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1.
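The two-process example above corresponds to a cycle in a wait-for graph, and detecting such a cycle is the core of deadlock detection. The sketch below (function name and graph encoding are invented for illustration) uses depth-first search to look for a back edge:

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph.
    wait_for maps each process to the processes it is waiting on."""
    visited, on_path = set(), set()

    def dfs(p):
        if p in on_path:                 # back edge: cycle, hence deadlock
            return True
        if p in visited:
            return False
        visited.add(p)
        on_path.add(p)
        if any(dfs(q) for q in wait_for.get(p, [])):
            return True
        on_path.discard(p)
        return False

    return any(dfs(p) for p in list(wait_for))

# The slide's example: P1 waits for P2's resource, P2 waits for P1's.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False
```

This is the graph-based view behind the "detection and recovery" strategy on the next slide: let deadlock happen, find the cycle, then break it by preempting one of its processes.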
Deadlock:
Methods for handling deadlock.
➢ There are three ways to handle deadlock:
❖ Deadlock prevention or avoidance: do not let the system enter a deadlock state; avoidance requires advance information about the resources each process will request.
❖ Deadlock detection and recovery: let deadlock occur, detect it, and then use preemption to recover.
❖ Ignore the problem altogether: if deadlock is very rare, let it happen and reboot the system. This is the approach that both Windows and UNIX take.
Discussion
Queries?
Prof. K. Adisesha
9449081542