OS Chapter 2
Operating System
BURIE CAMPUS
DEPARTMENT OF COMPUTER SCIENCE
By:
Amare W.
A process is an active entity, whereas a program is a passive entity.
Concurrent Processes
More than one process may exist in the system at the same time, and such processes may execute concurrently.
There are two types of concurrent processes
True Concurrency (Multiprocessing)
Two or more processes are executed simultaneously in a
multiprocessor environment
Apparent Concurrency (Multiprogramming)
Two or more processes are executed in an interleaved fashion in a uniprocessor environment by switching from one process to another.
Supports pseudo-parallelism, i.e. the fast switching among processes gives the illusion of parallel execution.
Process creation
New processes are created when the system initializes, when a running process issues a process-creation system call, when a user requests that a new process be created, or when a batch job is started.
Process termination
After a process has been created, it starts running and does
whatever its job is.
However, nothing lasts forever, not even processes.
Sooner or later the new process will terminate, usually due to
one of the following conditions:
1. Normal exit (voluntary).
2. Error exit (voluntary).
3. Fatal error (involuntary).
4. Killed by another process (involuntary).
Process States
During its lifetime, a process passes through a number of
states. The most important states are: New, Ready, Running,
Blocked (waiting) and Terminated.
1. New: A process that has just been created but has not yet
been admitted to the pool of executable processes by the
operating system
Information concerning the process is already maintained in
memory but the code is not loaded and no space has been
allocated for the process
….cont’d
2. Ready: A process that is prepared to execute when given the opportunity, waiting only to be assigned the CPU
3. Running: The process whose instructions are currently being executed by the processor
4. Blocked (Waiting): A process that cannot execute until some event occurs, such as the completion of an I/O operation
5. Exit (Terminated): A process that has been released from the
pool of executable processes by the operating system, either
because it halted or because it aborted for some reason
Context switching
A context switch occurs when the CPU switches from one process to another: the state (context) of the running process is saved, and the saved state of the process scheduled next is restored.
Threads
A thread is a single path of execution within a process. A thread is a dispatchable unit of work (a lightweight process) that has an independent context, state and stack.
A process is a collection of one or more threads and associated system resources.
Traditional operating systems are single-threaded systems; modern operating systems are multithreaded systems.
Multithreading
A process can contain multiple threads. A thread is known as a lightweight process. The idea is to achieve parallelism by dividing the process into multiple threads.
Multithreading is a technique in which a process, executing an application, is divided into threads that can run concurrently. It allows:
Handling several independent tasks of an application that do not need to be serialized (e.g. database servers, web servers).
Having greater control over the modularity of the application and the timing of application-related events.
Each thread has an independent context, state and stack.
All threads share the same address space, and a separate thread table is needed to manage the threads.
Because each thread has its own independent execution context, an application can carry out multiple activities in parallel by increasing the number of threads.
Figure: Single-threaded and multithreaded processes.
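To make this concrete, here is a minimal multithreading sketch (an illustrative example, not from the slides; the worker function and workload are assumptions). Several threads share the process's address space, each computing one partial sum:

import threading

# Shared address space: every thread sees the same 'results' list.
results = [0] * 4

def worker(index: int) -> None:
    # Each thread has its own stack and execution context,
    # but reads and writes the shared 'results' list.
    results[index] = sum(range(index * 1000, (index + 1) * 1000))

# Create and start one thread per sub-task.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all threads to finish

print("Partial sums computed concurrently:", results)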
Multithreading usage
Several reasons for having multiple threads:
Many applications need several activities going on at once. By decomposing such an application into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
Threads are lighter weight than processes, so they are easier (i.e., faster) to create and destroy than processes.
Having multiple threads within an application can provide higher performance.
If there is substantial computing and also substantial I/O, having threads allows these activities to overlap, thus speeding up the application.
Types of Threads
Threads can be implemented at two levels: user-level threads, managed by a thread library in user space without kernel involvement, and kernel-level threads, managed directly by the operating system.
Inter-process communication
Since processes frequently need to communicate with other processes, there is a need for a well-structured way for processes to communicate without using interrupts.
Inter-process communication (IPC) is the mechanism provided by the operating system that allows processes to communicate with each other.
This communication could involve a process letting another process know that some event has occurred, or the transfer of data from one process to another.
….cont’d
There are several reasons for providing an environment that allows process cooperation:
Information sharing: since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to such information.
Computation speedup: if we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others.
Notice that such speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
Modularity: if we want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
….cont’d
Convenience: even an individual user may work on many tasks at the same time, for instance editing, printing and compiling in parallel.
There are two fundamental models of interprocess communication:
1. Shared memory: a region of memory shared by the cooperating processes is established. Processes can exchange information by reading and writing data to the shared region.
Ex. producer and consumer sharing a common memory: a compiler produces assembly code that is consumed by an assembler; a client and a server exchange data.
To allow the producer and consumer to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer (a sketch follows after the second model).
….cont’d
2. Message passing: communication takes place by means of messages exchanged between the cooperating processes, which is useful when they share no common memory region.
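The shared-memory producer–consumer model above can be sketched with a bounded buffer (an illustrative example, not from the slides; the buffer size and item count are assumptions):

import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()               # the shared region: a bounded buffer of items
cond = threading.Condition()   # synchronizes access to the shared buffer

def producer():
    for item in range(10):
        with cond:
            while len(buffer) == BUFFER_SIZE:  # buffer full: wait for consumer
                cond.wait()
            buffer.append(item)                # fill one slot
            cond.notify_all()                  # wake a waiting consumer

def consumer():
    for _ in range(10):
        with cond:
            while not buffer:                  # buffer empty: wait for producer
                cond.wait()
            item = buffer.popleft()            # empty one slot
            cond.notify_all()
        print("consumed", item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()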
Critical Section
Process A reads the shared variable holding the next free slot (say it contains 7) and stores the value in a local variable called next_free_slot.
Just then a clock interrupt occurs, and the CPU decides that process A has run long enough, so it switches to process B.
Process B also reads the shared variable and also gets a 7.
At this instant both processes think that the next available slot is 7. This is a race condition: the outcome depends on the exact order in which the two processes run.
Mutual exclusion
To avoid race conditions we need mutual exclusion: a way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing.
However, sometimes processes have to access shared memory or files, or do other critical things that can lead to races. The part of the program where the shared memory is accessed is called the critical region or critical section.
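As a minimal sketch (illustrative, not from the slides), the following shows a lock enforcing mutual exclusion around a critical section that updates a shared counter:

import threading

counter = 0                # shared variable
lock = threading.Lock()    # provides mutual exclusion

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:         # enter the critical section
            counter += 1   # access to shared data; the lock is
                           # released on leaving the 'with' block

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000; without the lock the result could be lower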
Processor Scheduling
Scheduling refers to a set of policies and mechanisms to control the order of work to be performed by a computer system.
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Of all the resources of a computer system that are scheduled before use, the CPU/processor is by far the most important. Process scheduling is the means by which operating systems allocate processor time to processes.
Medium-term Scheduling
• The decision to add to the number of processes that are partially or fully in main memory
• It determines when a program is brought partially or fully into main memory so that it may be executed
• It is performed when swapping is done:
  - Ready/Suspend → Ready
  - Blocked/Suspend → Blocked
• Swapping a process out decreases the degree of multiprogramming.
Short-term Scheduling: also called the CPU scheduler.
• The decision of which ready process to execute next
• It determines which ready process will get processor time next
Selection function
• It determines which process, among ready processes, is
selected for execution
• It may be based on
- Priority
- Resource requirement
- Execution behavior: time spent in system so far (waiting and
executing), time spent in execution so far, total service time
required by the process
Scheduling algorithms can be divided into two categories with respect to how they treat the running process: preemptive and non-preemptive.
Preemptive
The strategy of allowing processes that are logically runnable to be temporarily suspended and moved to the ready state.
Events that may result in preemption are the arrival of new processes, the occurrence of an interrupt that moves a blocked process to the ready state, and clock interrupts.
Suitable for general-purpose systems with multiple users. It guarantees acceptable response time and fairness, but context switching is an overhead.
Non-Preemptive
Run-to-completion method: once a process is in the running state, it continues to execute until it terminates or blocks itself to wait for some event.
Simple and easy to implement
Used in early batch systems
It may be reasonable for some dedicated systems
Efficiency can be attained, but response time is very high
Scheduling Criteria
o Arrival Time: Time at which the process arrives in the ready queue.
o CPU Utilization: The percentage of time the CPU is busy out of the total time (time the CPU is busy + time it is idle). Hence, it measures how well the CPU is used.
To maximize utilization, keep the CPU as busy as possible.
CPU utilization ranges from about 40% (for lightly loaded systems) to 90% (for heavily loaded systems). (Why can CPU utilization not reach 100%? Because of the context switches between active processes.)
Scheduling Algorithms
1. First-Come-First-Served Scheduling (FCFS)
The process that requested the CPU first is allocated the CPU and keeps it until it releases it, either upon completion or to request an I/O operation.
The process that has been in the ready queue the longest is selected for running. Its selection function is waiting time, and it uses a non-preemptive scheduling/decision mode.
Process execution begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on.
Advantages
It is the simplest of all non-preemptive scheduling algorithms:
process selection & maintenance of the queue is simple
There is a minimum overhead and no starvation
It is often combined with priority scheduling to provide efficiency
Drawbacks
Poor CPU and I/O utilization: the CPU will be idle when a process is blocked for some I/O operation
Poor and unpredictable performance: it depends on the arrival order of the processes
Unfair CPU allocation: if a big process is executing, all other processes will be forced to wait for a long time until that process releases the CPU.
It performs much better for long processes than short ones.
First Come First Served (FCFS) algorithm (cont’d)
The average waiting time under the FCFS policy, however, is often quite long. Consider the following processes, which arrive at time 0, with the length of the CPU burst given in milliseconds.
Gantt chart:
Consider the following set of processes, with the length of the CPU burst time given in milliseconds:
1. Using FCFS
Gantt chart:
The waiting times and turnaround times for each process are:
Hence, average waiting time = (0 + 6 + 14 + 21)/4 = 10.25 milliseconds
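The arithmetic above can be checked with a short FCFS simulation (an illustrative sketch; the burst times 6, 8 and 7 are implied by the stated waiting times 0, 6, 14 and 21, while the last burst is an assumption, since it does not affect the waiting times):

def fcfs_waiting_times(bursts):
    # Return per-process waiting times under FCFS (all arrive at time 0).
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier ones finish
        clock += burst
    return waits

bursts = [6, 8, 7, 3]                 # last value assumed for illustration
waits = fcfs_waiting_times(bursts)
print(waits)                          # [0, 6, 14, 21]
print(sum(waits) / len(waits))        # 10.25 milliseconds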
2. Using SJF
Gantt chart:
In preemptive SJF (shortest remaining time first), the process that has the shortest expected remaining processing time is selected. If a new process arrives with a shorter next CPU burst than what is left of the currently executing process, the new process gets the CPU.
Its selection function is remaining execution time, and it uses a preemptive decision mode.
The SJF algorithm can be either preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process.
Consider the following four processes, with the length of the CPU burst given in milliseconds:
Process   Burst Time   Arrival Time
P1        7            0
P2        4            2
P3        1            4
P4        4            5
Gantt chart:
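A small preemptive-SJF (shortest-remaining-time-first) simulator, offered as a sketch for the table above; at every millisecond it runs the arrived process with the least remaining burst:

def srtf(processes):
    # processes: list of (name, burst, arrival). Returns finish times.
    # Simulates one millisecond at a time, always running the arrived
    # process with the shortest remaining burst (preemptive SJF).
    remaining = {name: burst for name, burst, _ in processes}
    arrival = {name: arr for name, _, arr in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = clock
    return finish

procs = [("P1", 7, 0), ("P2", 4, 2), ("P3", 1, 4), ("P4", 4, 5)]
done = srtf(procs)
for name, burst, arr in procs:
    # waiting time = finish - arrival - burst
    print(name, "waits", done[name] - arr - burst)  # 9, 1, 0, 2 (avg 3)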
Round Robin (RR): each process is assigned a time quantum Q; when the quantum expires, the running process is preempted and placed at the back of the ready queue. The average waiting time under the RR policy is often long. Consider the following set of processes, arriving at time 0, with the length of the CPU burst given in milliseconds.
RR with Q = 4
Gantt chart:
RR with Q = 2
Gantt chart:
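A round-robin sketch for both quantum values (illustrative; the slide's process table did not survive extraction, so the burst times below are assumptions):

from collections import deque

def round_robin(bursts, quantum):
    # bursts: dict of name -> burst time (all arrive at time 0).
    # Returns finish times under round robin with the given quantum.
    queue = deque(bursts)              # FIFO ready queue
    remaining = dict(bursts)
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run for at most one quantum
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append(name)                # preempted: back of the queue
    return finish

# Assumed burst times, since the original table is missing.
bursts = {"P1": 6, "P2": 3, "P3": 8}
print(round_robin(bursts, quantum=4))   # {'P2': 7, 'P1': 13, 'P3': 17}
print(round_robin(bursts, quantum=2))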
Priority Scheduling
Each process is assigned a priority, and the runnable process with the highest priority is run next. Processes can also be grouped into priority classes, using priority scheduling among the classes but round robin scheduling within each class.
Advantages
It considers the fact that some processes are more important
than others, i.e. it takes external factors into account.
Drawbacks
A high-priority process may run indefinitely and prevent all other processes from running.
This creates starvation of the other processes. There are two possible solutions to this problem:
• Assigning a maximum quantum to each process
• Assigning priorities dynamically, i.e. avoiding static priorities
Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, …, P5, with the length of the CPU burst given in milliseconds.
Hence, average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds
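A non-preemptive priority scheduler sketch that reproduces the waiting times above (the slide's process table did not survive extraction; the bursts 1, 5, 10 and 2 for P2, P5, P1 and P3 are implied by the stated waiting times, while P4's burst and the priority values are assumptions; lower number = higher priority):

def priority_schedule(processes):
    # processes: list of (name, burst, priority); all arrive at time 0.
    # Runs jobs in priority order (lower value = higher priority)
    # and returns each process's waiting time.
    order = sorted(processes, key=lambda p: p[2])
    waits, clock = {}, 0
    for name, burst, _ in order:
        waits[name] = clock
        clock += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_schedule(procs)
print(waits)                              # P1:6, P2:0, P3:16, P4:18, P5:1
print(sum(waits.values()) / len(waits))   # 8.2 milliseconds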
Deadlock
For each use of a kernel-managed resource by a process or thread, the operating system checks to make sure that the process has requested and has been allocated the resource.
A system table records whether each resource is free or allocated. For each resource that is allocated, the table also records the process to which it is allocated.
If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.
Deadlock is a situation where two or more processes are each waiting for a resource held by one of the others, so none of them can proceed.
Deadlock Characterization
Coffman (1971) identified four necessary conditions that must hold simultaneously for a deadlock to occur:
1. Mutual exclusion: only one process at a time can use a resource (non-sharable). No process can access a resource unit that has been allocated to another process.
2. Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
For example, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.
3. No preemption: a resource can be released only voluntarily by the process holding it; it cannot be forcibly taken away.
4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
There are three principal ways of handling deadlocks:
1. Ensure that the system will never enter a deadlock state. (Deadlock Prevention and Deadlock Avoidance)
2. Allow the system to enter a deadlock state, detect it and then recover. (Deadlock Detection and Recovery)
3. Ignore the problem and pretend that deadlocks never occur in the system.
Deadlock Prevention
By ensuring that at least one of the necessary conditions for deadlock cannot hold, deadlock can be prevented. This is mainly done by restraining how requests for resources can be made.
Deadlock prevention methods fall into two classes:
1. An indirect method of deadlock prevention prevents the occurrence of one of the first three necessary conditions listed previously, i.e. mutual exclusion, hold and wait, or no preemption.
2. A direct method of deadlock prevention prevents the occurrence of a circular wait.
Deadlock Avoidance
A deadlock-avoidance scheme requires each process to declare in advance the maximum number of resources of each type that it may need.
Having this full information about the sequence of requests and releases of resources, we can know whether or not the system is entering an unsafe state.
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
A state is safe if the system can allocate resources to each process in some order (a safe sequence) while avoiding a deadlock. A deadlock state is an unsafe state.
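As a sketch of how a safe state can be checked (a Banker's-algorithm-style safety test; the matrices below are assumed example data, not from the slides):

def is_safe(available, allocation, maximum):
    # available: free units per resource type.
    # allocation[i], maximum[i]: units held / maximum claim of process i.
    # Returns a safe sequence of process indices, or None if the state is unsafe.
    need = [[m - a for m, a in zip(maxi, alloc)]
            for maxi, alloc in zip(maximum, allocation)]
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            # Process i can run to completion if its remaining need fits in work.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # release
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# Assumed example: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(available, allocation, maximum))  # safe sequence [1, 3, 4, 0, 2]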
Thank you