
Chapter 2. Process and Process Management

The document discusses processes and process management. It defines a process as a program in execution. It describes the various components of a process including program code, process state, process control block (PCB), and context switching. It discusses process scheduling, which involves managing ready, device, and job queues. Scheduling algorithms like FCFS, SJF, priority, and round robin are also covered. The document differentiates between CPU-bound and I/O-bound processes and the need for a balance. It introduces the concept of medium-term scheduling to improve process mix by swapping processes in and out of memory.

Uploaded by

Yamini Gahlot
Copyright
© All Rights Reserved

Process and Process Management

Chapter-2

1 6/27/2019
Objectives
 To introduce the concept of a process -- a program in
execution, which forms the basis of all computation
 To describe the various features of processes,
including scheduling, creation and termination, and
communication
 To describe communication in client-server systems

Points to cover - Ref. book: Galvin
 Process concept and process states
 CPU-bound and I/O-bound processes
 Operating system services for process and thread management
 CPU scheduler: short-term, medium-term, long-term, dispatcher
 Scheduling: preemptive and non-preemptive
 Scheduling algorithms: FCFS, SJF, shortest remaining time, RR, priority scheduling, atomic transactions
Process concepts - Aim
 A process is a program in execution.
 A process is the unit of work in a modern time-sharing system.
 By switching the CPU between processes, the operating system can make the computer more productive.
Process concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 The terms job and process are same
 Process – a program in execution; process
Execution must progress in sequential fashion
 Multiple parts of process in memory:
 The program code, also called text section,
Current activity including program counter,
processor registers
 Process Stack containing temporary data
 Function parameters, return addresses, local variables
 Data section containing global variables
 Heap containing memory dynamically allocated during run time
Process concept
 Program is passive entity stored on disk (executable file),
process is active
 Program becomes process when executable file loaded
into memory
 Execution of program started via GUI mouse clicks,
command line entry of its name, etc a.exe or a.out
 Two processes can be associated with same program,
are considered separate execution sequences.
 E.g several user ---runs different copies of mail-
program
 User---runs different copies of web browser
Process in Memory
(figure: a process in memory - text, data, heap, and stack sections)

How a Process Is in Memory
(figure)
Process State
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution

Diagram of Process State
Process Control Block (PCB)
Information associated with each process
(also called task control block)
 Process state – new, running, waiting, etc.
 Program counter – address of the next instruction to execute
 CPU registers – contents of all process-centric registers
 CPU scheduling information – priorities, scheduling queue pointers
 Memory-management information – memory allocated to the process
 Accounting information – CPU time used, clock time elapsed since start, time limits
 I/O status information – I/O devices allocated to the process, list of open files
Context Switch
 When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process via a context switch.
 The context of a process is represented in its PCB.
 Context-switch time is pure overhead; the system does no useful work while switching.
 The time required depends on hardware support.
CPU Switch From Process to Process
Threads
 So far, a process has had a single thread of execution (e.g., a simple word processor).
 Most modern operating systems have extended the process concept to allow a process to have multiple threads of execution and thus to perform more than one task at a time.
 On a system that supports threads, the PCB is expanded to include information for each thread.
Process Scheduling
Definition:
 Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
 Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling
 Maximize CPU use, quickly switch processes onto CPU
for time sharing
 User can interact with each program while running.
 Process scheduler selects among available processes for
next execution on CPU
 Maintains scheduling queues of processes
 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main
memory, ready and waiting to execute
 Device queues – set of processes waiting for an I/O
device
 Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
Process Scheduling Queues
Scheduling queues are queues of processes or devices. When a process enters the system, it is put into the job queue, which consists of all processes in the system. The operating system also maintains other queues, such as device queues. A device queue holds the processes waiting for a particular I/O device; each device has its own device queue.
This figure shows the queueing diagram of process scheduling:
 A queue is represented by a rectangular box.
 The circles represent the resources that serve the queues.
 The arrows indicate the flow of processes in the system.
Process Scheduling Queues
Queues are of two types:
 Ready queue
 Device queue
 A newly arrived process is put in the ready queue, where it waits until it is allocated the CPU. Once the CPU is assigned to a process, that process executes. While the process is executing, any one of the following events can occur:
 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
Representation of Process Scheduling
 Queuing diagram represents queues, resources, flows
Schedulers
 Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
 Sometimes the only scheduler in a system
 Invoked frequently (every few milliseconds), so it must be fast. If it takes 10 ms to decide to execute a process for 100 ms, then 10/(100 + 10) = 9 percent of the CPU is being used (wasted) simply for scheduling the work.
 Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
 Invoked infrequently (seconds, minutes), so it may be slow and can take more time for scheduling
 Controls the degree of multiprogramming
I/O-bound and CPU-bound processes
 In general, most processes can be described as either
 I/O bound, or
 CPU bound.
 An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations.
 A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations.
 What happens if all processes are only I/O bound or only CPU bound?
I/O-bound and CPU-bound processes
 It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound processes.
 If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do.
 If all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced.
 The system with the best performance will thus have a combination of CPU-bound and I/O-bound processes.
Addition of Medium-Term Scheduling
 The key idea behind a medium-term scheduler is that it can sometimes be advantageous to remove a process from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming.
 Remove a process from memory, store it on disk, and later bring it back in from disk to continue execution: this is swapping.
Why is swapping needed?
 Swapping may be necessary to improve the process mix.
 A change in memory requirements may have overcommitted available memory, requiring memory to be freed up.
 What is the difference between the short-term, long-term, and medium-term schedulers?
Differences between schedulers
1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies between the other two.
3. The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over the degree of multiprogramming; the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes that are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.
Operations on Processes
 System must provide mechanisms for:
 process creation,
 process termination,
Process Creation
• Why is a process created?
• Processes are the means of running programs to perform tasks. Not all required processes are created when the computer starts; as users start using the computer, they assign new tasks, which are then turned into processes.
• When is a process created?
• At computer start or restart
• On a request by the user to start a new process, by double-clicking or opening an executable file, issuing a run command, etc.
• In response to a request from a running process to create a child
Process Creation
 A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
 Generally, a process is identified and managed via a process identifier (pid), which can be used as an index to access various attributes of the process within the kernel.
 Resource sharing options
 Parent and children share all resources
 Children share a subset of the parent's resources
 Parent and child share no resources
 Execution options
 Parent and children execute concurrently
 Parent waits until children terminate
A Tree of Processes in Linux

init (pid = 1)
├── login (pid = 8415)
│   └── bash (pid = 8416)
│       ├── ps (pid = 9298)
│       └── emacs (pid = 9204)
├── kthreadd (pid = 2)
│   ├── khelper (pid = 6)
│   └── pdflush (pid = 200)
└── sshd (pid = 3028)
    └── sshd (pid = 3610)
        └── tcsch (pid = 4005)
Process Creation (Cont.)
 Address space
 Child is a duplicate of the parent, or
 Child has a new program loaded into it (a new process image)
 UNIX examples
 fork() system call creates a new process
 exec() system call, used after a fork(), replaces the process's memory space with a new program
Example of fork
C Program Forking Separate Process
Process Termination
 Process executes last statement and then asks the operating
system to delete it using the exit() system call.
 Returns status data from child to parent (via wait())
 Process’ resources are deallocated by operating system
 Parent may terminate the execution of children processes
using the abort() system call. Some reasons for doing so:
 Child has exceeded allocated resources
 Task assigned to child is no longer required
 The parent is exiting and the operating system does not allow
a child to continue if its parent terminates
Process Termination
 Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated.
 Cascading termination: all children, grandchildren, etc. are terminated.
 The termination is initiated by the operating system.
 The parent process may wait for the termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process:
pid = wait(&status);
 If no parent is waiting (did not invoke wait()), the process is a zombie.
 If the parent terminated without invoking wait(), the process is an orphan.
Threads (Ch. 4, Galvin)
 A thread is a unit of CPU utilization.
 A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.
 A thread consists of:
 Thread ID
 Program counter
 Registers
 Stack
 It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
 An application typically is implemented as a separate process with several threads of control.
Single and Multithreaded Processes
e.g Multithreaded server architecture

Process vs Threads
Benefits
 Responsiveness: Multithreading an interactive application may increase responsiveness to the user.
 Resource Sharing: Threads share the memory and resources of the process to which they belong.
 Economy: Process creation is expensive; in Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.
 Scalability: The benefits of multithreading increase in multiprocessor systems, because different threads can be scheduled to different processors.
User Threads
 Threads supported at the user level are user threads.
 Thread management is done by a user-level threads library: the library provides support for thread creation, scheduling, and management. The kernel is not aware of user-level threads.
 Three primary thread libraries:
 POSIX Pthreads
 Win32 threads
 Java threads
Kernel Threads
 Threads supported at the OS level, directly by the kernel, are kernel threads. The kernel performs thread creation, scheduling, and management inside the kernel.
 There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process.
 The kernel maintains context information for the process as a whole and for the individual threads within the process. Scheduling by the kernel is done on a per-thread basis.
 Kernel threads are generally slower to create and manage than user threads.
Multithreading Models
 Many systems provide support for both user-level and kernel-level threads, giving rise to different multithreading models.
 Many-to-One: maps many user-level threads to one kernel-level thread.
 One-to-One: maps each user-level thread to a kernel-level thread.
 Many-to-Many: multiplexes many user-level threads onto a smaller or equal number of kernel threads.
 Has the advantages of both the many-to-one and one-to-one models
 Solaris 2, IRIX, and HP-UX take this approach
Many-to-One Model
 The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
 If the user-level thread libraries are implemented in the operating system in such a way that the system does not support them, then the kernel threads use the many-to-one relationship mode.
One-to-One Model
 There is a one-to-one relationship between each user-level thread and a kernel-level thread. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
 The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT, and Windows 2000 use the one-to-one relationship model.
Many-to-Many Model
 In this model, many user-level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
 The diagram of the many-to-many model shows that developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Threading Issues
 Semantics of the fork() and exec() system calls:
 In a multithreaded program, when a fork() system call is executed by a thread, does the new process duplicate all the threads, or is the new process single-threaded?
 Depends on the implementation and the application.
 The exec() system call works the same way we saw earlier – i.e., it replaces the entire process, including all threads.
 Thread cancellation: deals with the termination of a thread.
 E.g., if a database search is performed by several threads concurrently, all threads can be terminated once one thread returns the result. Another example is a web browser.
 Asynchronous cancellation: one thread immediately terminates the target thread.
 Deferred cancellation: the target thread periodically checks whether it should terminate.
 Most operating systems allow a process or thread to be cancelled asynchronously.
Threading Issues (cont.)
 Signal handling: A signal is used in UNIX systems to notify a process that a particular event has occurred. Signals follow this pattern:
 A signal is generated by the occurrence of a particular event.
 A generated signal is delivered to a process. Once delivered, the signal must be handled.
 Synchronous signals: delivered to the same process that performed the operation causing the signal. For example, division by zero or an illegal memory access.
 Asynchronous signals: generated by an event external to the process and received by the process asynchronously. For example, <control><C>.
 Signal handlers:
 Default signal handlers provided by the OS kernel
 User-defined signal handlers
 Issues in handling signals:
 In single-threaded programs, signals are always delivered to the process. In multithreaded programs, which thread should be delivered the signal?
Thread Pooling
 Reduces the overhead involved in thread creation.
 Instead of creating a thread for each service request, a set of threads is created by the server and placed in a pool.
 When a request for a service arrives, the service is passed to one of the threads waiting in the pool.
 After the request is serviced, the thread is returned to the pool to await more work.
 If a request arrives and there are no threads waiting in the pool, the server waits until a thread becomes free.
 Benefits of thread pooling:
 Faster servicing of requests, because no overhead is involved in thread creation
 Limits the total number of threads that exist at any one point in the system
CPU Scheduling Algorithms
 Basic concepts
 Scheduling: preemptive and non-preemptive
 Scheduling algorithms:
o FCFS
o SJF
o Shortest remaining time
o RR
o Priority scheduling
 For all examples, refer to the problems completed in class.
Basic Concepts
 Scheduling of this kind is a fundamental
operating-system function. Almost all
computer resources are scheduled before
use. The CPU is, of course, one of the
primary computer resources. Thus, its
scheduling is central to operating-system
design.
 Maximum CPU utilization obtained with
multiprogramming
 CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and
I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern
CPU Scheduler
 Short-term scheduler selects from among the processes in
ready queue, and allocates the CPU to one of them
 Queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 Scheduling under 1 and 4 is non-preemptive
 All other scheduling is preemptive
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities
Dispatcher
 The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
 The dispatcher should be as fast as possible, since it is invoked during every process switch.
 Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – number of processes that complete their execution per
time unit
 Turnaround time – amount of time to execute a particular process:
The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of
the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
 Waiting time – amount of time a process has been waiting in the
ready queue
 Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)
 The turnaround time is generally limited by the speed of the output
device.
Scheduling Algorithm Optimization Criteria
 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
First-Come, First-Served (FCFS) Scheduling
 With this scheme, the process that requests the CPU first is allocated the CPU first. The FCFS policy is easily managed with a FIFO queue.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
 Jobs are executed on a first-come, first-served basis.
 Easy to understand and implement.
 Poor in performance, as the average wait time is high.

Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its next CPU
burst
 Use these lengths to schedule the process with the shortest
time
 SJF is optimal – gives minimum average waiting time for a
given set of processes
 The difficulty is knowing the length of the next CPU request
 Could ask the user
Two types:
1. SJF (non-preemptive)
2. SJF (preemptive), also called Shortest Remaining Time First
Determining Length of Next CPU Burst
 Can only estimate the length – it should be similar to previous ones
 Then pick the process with the shortest predicted next CPU burst
 Can be done by using the lengths of previous CPU bursts, using exponential averaging:
1. t_n = actual length of the nth CPU burst
2. τ_(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_(n+1) = α·t_n + (1 − α)·τ_n
 Commonly, α is set to 1/2
 The preemptive version is called shortest-remaining-time-first
Example of Shortest-Remaining-Time-First (SJF with Preemption)
 Now we add the concepts of varying arrival times and preemption to the analysis:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

 Preemptive SJF Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

 Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 ms
Priority Scheduling
 A priority number (integer) is associated with each process.
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
 Preemptive
 Non-preemptive
 SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
 Problem: starvation (indefinite blocking) – low-priority processes may never execute.
 Solution: aging – as time progresses, increase the priority of the process.
Round Robin (RR)
 Each process gets a small unit of CPU time (a time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
 A timer interrupts every quantum to schedule the next process.
 Performance:
 q large: behaves like FIFO
 q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high
Time Quantum and Context Switch Time
Self Study Points
 Multilevel queue scheduling.
 Multilevel Feedback Queue Scheduling

