Module 3 Concurrency

The document discusses concurrency, scheduling, and dispatching in operating systems. It defines concurrency as the execution of multiple instruction sequences simultaneously, which can cause problems like deadlocks. There are two models for concurrent programs: shared memory and message passing. The scheduler decides which process gets CPU time using various algorithms. The dispatcher then performs the context switch, allocating the CPU to the selected process.


Module 3: Concurrency, Scheduling and Dispatch
CONCURRENCY:

• Concurrency is the execution of multiple instruction sequences at the same time. It happens in the operating system when several process threads are running in parallel.

• Concurrency results in sharing of resources, which can lead to problems such as deadlocks and resource starvation.

• Concurrency means multiple computations are happening at the same time.
TWO MODELS FOR CONCURRENT PROGRAM:

Shared memory. In the shared-memory model of concurrency, concurrent modules interact by reading and writing shared objects in memory.
Other examples of the shared-memory model:

• A and B might be two processors (or processor cores) in the same computer, sharing the same physical memory.

• A and B might be two programs running on the same computer, sharing a common file system with files they can read and write.

• A and B might be two threads in the same Java program (threads are explained below), sharing the same Java objects.
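The last case can be sketched in Java. This is a minimal illustrative example, not from the slides; the class and names are invented here. Two threads share one counter object in memory, with `synchronized` used so their updates do not interleave:

```java
// Two threads sharing one object in memory (shared-memory model).
public class SharedCounter {
    private int count = 0;

    // synchronized so concurrent increments cannot interleave mid-update
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.get()); // 200000 with synchronization
    }
}
```

Both threads read and write the same `SharedCounter` object; the shared memory itself is the communication mechanism.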
Message passing. In the message-passing model, concurrent modules interact by sending messages to each other through a communication channel. Modules send off messages, and incoming messages to each module are queued up for handling.
Examples of the message-passing model:

• A and B might be two computers in a network, communicating over network connections.

• A and B might be a web browser and a web server: A opens a connection to B, asks for a web page, and B sends the web page data back to A.

• A and B might be an instant messaging client and server.

• A and B might be two programs running on the same computer whose input and output have been connected by a pipe.
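Message passing can be sketched in Java with a blocking queue as the communication channel (an illustrative example, not from the slides; the names are invented here). The producer module puts messages into the channel, and incoming messages are queued up until the consumer handles them:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Message passing: modules interact only through a channel of messages,
// not through shared mutable state.
public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(10);

        // Producer module: sends messages into the channel.
        Thread producer = new Thread(() -> {
            try {
                channel.put("hello");
                channel.put("world");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Consumer side: take() blocks until a queued message arrives.
        System.out.println(channel.take()); // hello
        System.out.println(channel.take()); // world
        producer.join();
    }
}
```

Because the modules never touch each other's state directly, the channel's queueing discipline replaces the locking needed in the shared-memory model.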
KINDS OF CONCURRENT MODULES:

• Process. A process is an instance of a running program that is isolated from other processes on the same machine. In particular, it has its own private section of the machine's memory.

• Thread. A thread is a locus of control inside a running program: the place in the program that is currently being executed, plus the stack of method calls that led to that place and through which it will be necessary to return.
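The "locus of control plus call stack" idea can be made concrete in Java (an illustrative sketch, not from the slides; the names are invented here). Each thread in the process has its own call stack, which grows with each nested method call:

```java
// A thread is a locus of control: it carries its own call stack of the
// method calls that led to the current place in the program.
public class ThreadDemo {
    // Each nested call adds one frame to the calling thread's stack.
    static int depth(int n) {
        if (n == 0) return Thread.currentThread().getStackTrace().length;
        return depth(n - 1);
    }

    public static void main(String[] args) throws InterruptedException {
        // A second thread inside the same process: it shares the
        // process's memory but runs with its own stack.
        Thread t = new Thread(() ->
            System.out.println("running in: " + Thread.currentThread().getName()));
        t.start();
        t.join();
        // Deeper call chains mean a deeper stack for this thread.
        System.out.println(depth(5) > depth(0)); // true
    }
}
```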
MOTIVATIONS FOR ALLOWING CONCURRENT EXECUTION:

• Physical resource sharing: In a multiuser environment, hardware resources are limited and must be shared.

• Logical resource sharing: Several processes may need the same piece of information, e.g. a shared file.

• Computation speedup: Parallel execution.

• Modularity: Dividing system functions into separate processes.
PROBLEMS IN CONCURRENCY:

• Sharing global resources: Sharing global resources safely is difficult. If two processes both make use of a global variable and both perform reads and writes on that variable, then the order in which the various reads and writes are executed is critical.

• Optimal allocation of resources: It is difficult for the operating system to manage the allocation of resources optimally.

• Locating programming errors: It is very difficult to locate a programming error because failures are usually not reproducible.

• Locking the channel: It may be inefficient for the operating system to simply lock a channel and prevent its use by other processes.
ADVANTAGES OF CONCURRENCY:

• Running of multiple applications: It enables multiple applications to run at the same time.

• Better resource utilization: Resources that are unused by one application can be used by other applications.

• Better average response time: Without concurrency, each application has to run to completion before the next one can start.

• Better performance: Concurrency enables better performance. When one application uses only the processor and another uses only the disk drive, the time to run both applications concurrently to completion is shorter than the time to run each consecutively.
ISSUES OF CONCURRENCY:

Non-atomicity: Operations that are non-atomic but interruptible by multiple processes can cause problems.

Race conditions: A race condition occurs when the outcome depends on which of several processes gets to a point first.

Blocking: Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal. If the process is required to periodically update some data, this would be very undesirable.

Starvation: Starvation occurs when a process never obtains the service it needs to progress.

Deadlock: Deadlock occurs when two processes are each blocked waiting for the other, and hence neither can proceed to execute.
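A race condition on a non-atomic operation can be demonstrated with a short Java sketch (illustrative only, not from the slides; the names are invented here). `count++` compiles to a read-modify-write sequence, so two unsynchronized threads can read the same value, both add one, and lose an update:

```java
// Race condition demo: count++ is non-atomic (read, add, write),
// so unsynchronized concurrent increments can be lost.
public class RaceDemo {
    static int count = 0;

    // Runs two threads that each increment the shared counter
    // 'iterations' times WITHOUT synchronization, and returns the total.
    static int run(int iterations) throws InterruptedException {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < iterations; i++) count++; // unsynchronized!
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        // Often prints less than 200000, and a different value on each
        // run: the outcome depends on how the threads happen to interleave.
        System.out.println(run(100_000));
    }
}
```

The non-reproducibility of the result is exactly why such programming errors are hard to locate.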
SCHEDULER in OPERATING SYSTEM:

• Scheduling is how the operating system decides which process should be allocated the CPU when several processes are ready to execute.

• Scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.

• Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to submit into the system and to decide which process to run.
TYPES OF SCHEDULERS:

Long Term Scheduler:

• It is also called the job scheduler.

• A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory, where they become eligible for CPU scheduling.

• The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming.
Short Term Scheduler:

• It is also called the CPU scheduler.

• Its main objective is to increase system performance under the chosen set of criteria. It carries out the change of a process from the ready state to the running state.

• The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.

• Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler:

• Medium-term scheduling is a part of swapping.

• It removes processes from memory, reducing the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes.

• A running process may become suspended if it makes an I/O request, and a suspended process cannot make any progress towards completion. In this condition, the suspended process is moved to secondary storage to free memory for other processes.
CATEGORIES IN SCHEDULING:

Non-preemptive: A process's resources cannot be taken away before the process has finished running. Resources are switched only when the running process finishes and transitions to a waiting state.

Preemptive: The OS assigns resources to a process for a predetermined period of time. A process may be switched from the running state to the ready state, or from the waiting state to the ready state, during resource allocation. This switching happens because the CPU may give priority to other processes and substitute a higher-priority process for the currently active one.
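Preemption can be sketched with a toy round-robin simulation (an illustrative example, not from the slides; the burst times and names are invented here). Each process runs for at most one time quantum; if it is not finished, it is preempted and moved to the back of the ready queue:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy round-robin (preemptive) scheduling simulation.
public class RoundRobin {
    // Returns the completion time of each process, given its CPU burst
    // time, assuming all processes arrive at t=0.
    static int[] simulate(int[] burstTimes, int quantum) {
        int[] remaining = burstTimes.clone();
        int[] finish = new int[remaining.length];
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < remaining.length; i++) ready.add(i);

        int time = 0;
        while (!ready.isEmpty()) {
            int p = ready.poll();                      // pick next ready process
            int run = Math.min(quantum, remaining[p]); // run one quantum at most
            time += run;
            remaining[p] -= run;
            if (remaining[p] > 0) ready.add(p);        // preempted: requeued
            else finish[p] = time;                     // finished
        }
        return finish;
    }

    public static void main(String[] args) {
        // Three processes P0..P2 with invented burst times 5, 3, 1.
        int[] finish = simulate(new int[]{5, 3, 1}, 2);
        for (int i = 0; i < finish.length; i++)
            System.out.println("P" + i + " finished at t=" + finish[i]);
        // P0 finishes at t=9, P1 at t=8, P2 at t=5
    }
}
```

Under a non-preemptive policy such as first-come-first-served, P2's one-unit job would instead have to wait behind the full 5- and 3-unit bursts; the quantum is what lets short jobs finish early.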
DISPATCHER in OPERATING SYSTEM:

• The Dispatcher is a special program that comes into play after the scheduler. When the short-term scheduler selects a process from the ready queue, the Dispatcher performs the task of allocating the selected process to the CPU.

• A Dispatcher is a module that gives a process control of the CPU after the short-term scheduler has selected it.

• A Dispatcher performs various tasks, including context switching, setting up user registers, and memory mapping.

• A running process goes to the waiting state for an I/O operation etc., and then the CPU is allocated to some other process. This switching of the CPU from one process to another is called context switching.
• The Dispatcher needs to be as fast as possible, as it runs on every context switch. The time consumed by the Dispatcher is known as dispatch latency.
DISPATCHER RESPONSIBILITIES:

• Switching to user mode: All of the low-level operating system processes run with kernel-level security access, but all application code and user-issued processes run in user mode. The Dispatcher switches the process to user mode.

• Addressing: The program counter (PC) register points to the next instruction to be executed. The Dispatcher is responsible for transferring control to that address.
• Initiation of context switch: A context switch occurs when a currently running process is halted, all of its data and its process control block (PCB) are stored in main memory, and another process is loaded in its place for execution.

• Managing dispatch latency: Dispatch latency is the time it takes to stop one process and start another. The lower the dispatch latency, the more efficient the software for the same hardware configuration.
EXAMPLE OF SCHEDULER AND DISPATCHER:
[diagram from the original slides not reproduced]

DIFFERENCE OF SCHEDULER AND DISPATCHER:
[comparison table from the original slides not reproduced]
END OF MODULE
