
Chapter Two

Processes and process management

By Abebaw S.
Outline
 Process and Thread
 The concept of multi-threading
 Inter process communication
 Race conditions
 Critical Sections and mutual exclusion
 Process Scheduling
 Preemptive and non-preemptive scheduling
 Scheduling policies
 Deadlock
 Deadlock prevention
 Deadlock detection
 Deadlock avoidance
Process concept
 Process can be defined as:
 A unit of work.
 A program in execution.
 An instance of a running program.
 An entity that can be assigned to a processor.
 A program is a set of instructions to perform a specific task.
 For example, when we write a program in C++ and compile it, the
compiler creates binary code. The original code and the binary code
are both programs. When we actually run the binary code, it becomes a
process.
Process (continue . . . )
 A single program can create many processes. For example, when we open
a binary file multiple times, multiple processes are created (see the
sketch below).
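As a rough illustration (not from the original slides), the following minimal C sketch assumes a POSIX system and uses fork(), so a single run of one program results in two processes; compile with a C compiler such as gcc.

    /* One program, two processes: fork() duplicates the calling process. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* create a child process */
        if (pid == 0) {
            printf("child  process, pid=%d\n", getpid());
        } else if (pid > 0) {
            printf("parent process, pid=%d\n", getpid());
            wait(NULL);                  /* wait for the child to finish */
        } else {
            perror("fork");              /* fork failed */
            return 1;
        }
        return 0;
    }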
 Difference between program and process
Program
 Definition: a collection of instructions designed to accomplish a certain task.
 Nature: passive entity.
 Lifespan: much longer.
Process
 Definition: an instance of a running program.
 Nature: active entity.
 Lifespan: limited.
Process (continue . . . )
Differences (continue …)
Program
 Resources: requires only memory space to store the instructions.
 Computation time: has no computation time or cost of its own.
Process
 Resources: requires CPU, memory address space, disk, and input/output
during its lifetime.
 Computation time: consumes CPU time and incurs cost while it runs.
Process (continue . . . )
 In main memory, a process has four sections:
 Text: contains the executable code. The current activity of the
process is represented by the value of the program counter and the
contents of the processor's registers.
 Data: contains global and static variables.
Fig: Process in Memory
Process (continue . . . )
 Stack: contains temporary data such as function parameters, return
addresses, and local variables.
 Heap: memory dynamically allocated to the process during its run time.
Process State (Continue. . .)
 The process, from its creation to completion, passes through various
states.
 The state of a process is defined by the current activity of that
process.
 Each process may be in one of the following states:
 New: the process is being created, i.e., a process that is about to
be picked up by the OS into main memory.
 Ready: the process is in main memory and prepared for execution, but
is not currently assigned to the processor.
Process State (Continue. . .)

Fig: Diagram of Process State
Process State (Continue. . .)
 Running: instructions are being executed; in this state the process
has the CPU.
 Waiting: the process is waiting for some event to occur. When the
process waits for a certain resource (for example, I/O), the OS moves
it to the waiting state and assigns the CPU to another process.
 Terminated: the process has finished execution.
Process State (Continue. . .)
Other states
 Suspended ready: a process in the ready state that is moved from main
memory to secondary memory due to a lack of resources.
 Suspended wait: if a process in the waiting state lacks the resources
it needs, the OS removes it from main memory and puts it in secondary
memory.
Process State Transition
 A process transition is a change of state of a process due to an
event.
 Transitions are represented by arrows in a process state diagram,
which shows the possible states and transitions of a process.
 For example, a process can transition from ready to running when the
scheduler selects it for execution, or from running to waiting when it
requests an I/O operation.
 The figure on the next slide shows the process state transitions.
Process State Transition (Continue. . .)

Fig: Process State Transition
Process State Transition
 Null to new: a new process is created to execute a program.
 New to ready: the OS has allocated resources to the process and it is
ready to be executed.
 Ready to running: when the CPU becomes available, the OS selects a
process from the ready queue according to its scheduling algorithm and
moves it to the running state.
 Running to terminated: when a process completes its execution or is
terminated by the OS, it moves to the terminated state.
Process State Transition (Continue. . .)
 Running to ready: occurs when the running process has used up its
maximum allowed time for continuous execution (it is preempted).
 Running to wait: if a process requests something for which it must
wait, it is placed in the waiting state.
 Wait to ready: a process in the waiting state is moved to the ready
state when the event for which it has been waiting occurs (a small
code sketch of these states follows this list).
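As an illustrative sketch only (the state names mirror the slides, while the dispatch function is a hypothetical simplification of what a scheduler does), this C fragment shows the ready-to-running transition.

    /* Simplified process states and one transition (dispatch: ready -> running). */
    #include <stdio.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Only a process in the READY state may be dispatched to the CPU. */
    enum proc_state dispatch(enum proc_state s) {
        return (s == READY) ? RUNNING : s;
    }

    int main(void) {
        enum proc_state s = READY;
        s = dispatch(s);                 /* ready -> running */
        printf("running? %s\n", s == RUNNING ? "yes" : "no");
        return 0;
    }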
Process Control Block (PCB)
 Each process is represented in the OS by a PCB.
 A PCB is a data structure that contains information related to the
process.
 The PCB is also known as a task control block.
 It contains many pieces of information associated with a specific
process, including:
 Process ID: a unique number given to each process by the operating
system, used to identify the process.
 Process state: the state may be new, ready, running, waiting,
terminated, etc.
PCB (Continue …)
 Program counter: indicates the address of the next instruction to be
executed for that particular process.
 CPU registers: the registers being used by a particular process.
There are different kinds of registers, such as index registers, stack
pointers, and general-purpose registers.
Fig: PCB
PCB (Continue …)
 CPU scheduling information: priority, scheduling queue pointers (to
indicate the order of execution), and several other scheduling
parameters.
 Memory-management information: describes the memory being used by a
particular process. It includes information such as the values of the
base and limit registers, the page tables, the segment tables, etc.
 I/O status information: includes the list of I/O devices used by the
process, the list of open files, etc. (a sketch of a PCB as a data
structure follows).
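A hypothetical sketch of a PCB expressed as a C structure; the field names are illustrative only (a real kernel structure, such as Linux's task_struct, is far larger), but each field corresponds to one of the PCB entries listed above.

    /* Illustrative PCB layout; field names are hypothetical. */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process ID */
        enum proc_state state;            /* process state */
        uint64_t        program_counter;  /* address of the next instruction */
        uint64_t        registers[16];    /* saved CPU registers */
        int             priority;         /* CPU-scheduling information */
        struct pcb     *next_in_queue;    /* scheduling queue pointer */
        uint64_t        base_register;    /* memory-management information */
        uint64_t        limit_register;
        int             open_files[16];   /* I/O status: open file descriptors */
    };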
Threads
 A thread is a subset of a process.
 Also known as a lightweight process.
 A process can have more than one thread, and these threads are managed
independently by the scheduler.
 All the threads within one process are related to each other.
 Threads share some common information, such as the data segment, code
segment, and open files, with their peer threads.
 But each thread has its own registers, stack, and program counter (see
the sketch below).
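A minimal sketch (not from the slides) using POSIX threads: the two threads share the global variable in the data segment, while each has its own local variable on its own stack; compile with -pthread.

    /* Two threads in one process: shared globals, private stacks. */
    #include <stdio.h>
    #include <pthread.h>

    int shared_counter = 0;              /* shared: data segment */

    void *worker(void *arg) {
        int local = *(int *)arg;         /* private: this thread's stack */
        shared_counter += local;         /* unsynchronized; see race conditions later */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int a = 1, b = 2;
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);
        return 0;
    }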
Threads (Continue…)

Fig: Single-threaded and multithreaded processes
Threads (Continue…)
Differences
Process
 An instance of a program that is being executed.
 Processes are independent of each other and hence do not share memory
or other resources.
 Each process is treated as new by the operating system.
 If one process gets blocked by the OS, other processes can continue
execution.
Threads
 A segment of a process, or a lightweight process.
 Threads are interdependent and share memory.
 The OS treats all the user-level threads of a process as a single
process.
 If any user-level thread gets blocked, all of its peer threads also
get blocked.
Threads (Continue…)
Process
 Context switching between two processes takes much more time, as
processes are heavyweight.
 The data and code segments of each process are independent of other
processes.
 The OS takes more time to terminate a process.
 New process creation takes more time, as each new process acquires all
of its own resources.
Threads
 Context switching between threads is fast because they are very
lightweight.
 Threads share the data and code segments with their peer threads.
 Threads can be terminated in very little time.
 A thread needs less time for creation.
Threads
 Advantages
 Minimize the context switching time.
 Provide concurrency within a process.
 Efficient communication.
 Allow utilization of multiprocessor architectures to a greater scale
and efficiency.
Types of Thread
 User Level Threads
 Kernel Level Threads
Types of Thread (continue …)
 User Level Threads
 Managed in user space by a thread library, not by the kernel.
 The kernel is not aware of the existence of these threads.
 Cannot take advantage of multiprocessing.
 The thread library contains code for creating and destroying threads.
 Not recognized by the OS.
 Take less time for context switching.
 Can be created and managed quickly.
 Example: Java threads.
Types of Thread (Continue . . .)
 Advantages
 Simple and quick to create.
 Can run on any operating system.
 Do not need system calls to create threads.
 Switching between threads does not need kernel-mode privileges.
 Disadvantages
 Cannot benefit from multiprocessing.
 If a thread performs a blocking operation, the entire process is
halted.
Types of Thread (Continue . . .)
 Kernel level threads
 Managed by the OS.
 Can take advantage of multiprocessing.
 Complicated implementation.
 Take more time for context switching.
 Hardware support is needed.
 If one kernel thread performs a blocking operation, another thread
can continue execution.
 If a thread in the kernel is blocked, it does not block all other
threads in the same process.
Multithreading
 Multithreading refers to the ability of an OS to support multiple
threads of execution within a single process.
 If a process has multiple threads of control, it can perform more than
one task at a time.
 Each thread has its own program counter, stack, and registers, but
threads share common code, data, and some OS data structures such as
open files.
Thread Models
 Describe the mapping of user-level threads to kernel-level threads.
 User threads map to kernel threads in one of the following ways:
 One to one model
 Many to one model
 Many to many model
Thread Models (continue …)
 One to one model
 Maps a single user-level thread to a single kernel-level thread.
Thread Models (continue …)
 Allows multiple threads to run in parallel on multiprocessors.
 Drawback: every new user thread requires the creation of a
corresponding kernel thread, which causes overhead.
 Many to one model
 Maps many user-level threads to one kernel thread.
 Thread management is handled by the thread library in user space,
which is very efficient.
 Facilitates an effective context-switching environment.
 Disadvantage:
‣ If a blocking system call is made, the entire process blocks.
Thread Models (continue …)
 Since only one thread can access the kernel at a time, multiple
threads are unable to run in parallel on multiprocessors.

Fig: Many-to-one model
Thread Models (continue …)
 Many to many model
 Multiplexes any number of user threads onto an equal or smaller
number of kernel threads.
 Combines the best features of the one-to-one and many-to-one models.
 Users can create any number of threads.
 When a thread performs a blocking system call, the kernel can
schedule another thread for execution.
 Processes can be split across multiple processors.
Thread Models (continue …)

Fig: Many to many model


Inter process communication (IPC)
 Processes frequently need to communicate with other processes.
 IPC is the way in which processes communicate with each other.
 Processes executing concurrently in the OS are either independent or
cooperating processes.
A. Independent process: the execution of one process does not affect
the execution of another process.
B. Cooperating process: the execution of one process affects the
execution of another process.
IPC (continue…)
‣ Cooperating processes need to be synchronized so that their order of
execution can be guaranteed.
 Reasons for cooperation
 Information sharing: for example, several users may be interested in
accessing the same file concurrently.
 Computation speedup: to speed up a computation, divide the task into
several subtasks and run them simultaneously on several different
processors.
IPC (continue…)
 Modularity: dividing the system into different modules that are later
put together to achieve a single goal. Those modules need to
communicate and cooperate with each other.
 Convenience: a user may run different tasks at the same time; they
should run smoothly without crashing each other.
IPC (continue…)
 There are two fundamental models of IPC:
A. Shared memory: a region of memory shared by cooperating processes.
Processes exchange information by writing to and reading from the
shared region.
B. Message passing: communication takes place by means of exchanging
messages between the cooperating processes.
‣ This is particularly useful in distributed environments, where the
communicating processes reside on different computers connected by a
network (a pipe-based sketch of message passing follows).
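A minimal sketch of the message-passing model, assuming a POSIX system: a parent and a child process exchange a message through a pipe (shared memory would instead use calls such as shm_open and mmap, not shown here).

    /* Message passing between a parent and a child process via a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        pipe(fd);                              /* fd[0] = read end, fd[1] = write end */
        if (fork() == 0) {                     /* child: sender */
            close(fd[0]);
            const char *msg = "hello from child";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            return 0;
        }
        close(fd[1]);                          /* parent: receiver */
        char buf[64];
        read(fd[0], buf, sizeof buf);
        printf("parent received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }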
IPC (continue…)

Fig (a): Shared memory    Fig (b): Message passing

Race condition
 A race condition is a situation that may occur inside a critical
section.
 It happens when the result of executing multiple threads in a critical
section varies according to the order in which the threads execute.
 A race condition leads to data inconsistency.
 Proper use of thread synchronization can prevent race conditions.
 The figure in the next slide shows how a race condition can occur (a
code sketch also follows).
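A small sketch (not from the slides) that typically exhibits the race: two POSIX threads increment a shared counter with no synchronization, so updates can interleave and be lost, and the printed value is often less than 2000000; compile with -pthread.

    /* Race condition: unsynchronized read-modify-write on a shared counter. */
    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                        /* shared, unprotected */

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                       /* not atomic: load, add, store */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }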
Race condition (Continue . . .)

Fig: Race condition


Critical Section (CS)
 In multiprogramming, shared resources need to be protected to avoid
erroneous behavior.
 The protected section of code is the critical section.
 It cannot be executed by more than one process at a time.
 The entry section handles entry into the critical section. It acquires
the resources needed for execution by the process.
 The exit section handles the exit from the critical section. It
releases the resources and also informs the other processes that the
critical section is free.
 The critical section problem needs a solution to synchronize the
different processes.
Critical Section (Continue…)
 A solution to the critical section problem must satisfy the following
conditions:
 Mutual exclusion: only one process can be inside the critical section
at any time. If any other processes require the critical section, they
must wait until it is free.
 Progress: if a process is not using the critical section, it should
not stop any other process from accessing it. In other words, any
process can enter the critical section if it is free.
 Bounded waiting: each process must have a limited waiting time; it
should not wait endlessly to access the critical section (a sketch of
mutual exclusion using a lock follows this list).
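A minimal sketch of mutual exclusion using a POSIX mutex (one possible synchronization primitive, not the only solution): the lock and unlock calls play the roles of the entry and exit sections, so only one thread is inside the critical section at a time and the final count is always 2000000; compile with -pthread.

    /* Mutual exclusion: a mutex protects the critical section. */
    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);       /* entry section */
            counter++;                       /* critical section */
            pthread_mutex_unlock(&lock);     /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }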
