UNIT 2 OS - PPT

Uploaded by Karthik Vijay

23AM4403

OPERATING SYSTEMS
UNIT II
CPU SCHEDULING AND THREADS

CPU Scheduling – Basic Concepts, Scheduling Criteria, Scheduling Algorithms, Thread Scheduling, Multiple-Processor Scheduling, Real-Time CPU Scheduling – Deadlocks – Methods of Handling Deadlocks.

Threads – Introduction, Types of Threads, Multicore and Multithreading, Thread Management in Windows – Windows kernel mode driver installation.

2
CPU SCHEDULING

CPU scheduling is the process of allowing one process to use the CPU while the execution of another is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.

3
CPU-I/O Burst Cycle

• Maximum CPU utilization obtained with multiprogramming


• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution
and I/O wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern

4
CPU SCHEDULER
• Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler (or CPU
scheduler).
• The ready queue is not necessarily a first-in, first-out (FIFO) queue. It may be a
FIFO queue, a priority queue, a tree, or simply an unordered linked list.

5
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, the scheme is nonpreemptive; otherwise, it is preemptive.

6
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler.
This function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program
The time it takes for the dispatcher to stop one process and start another running is
known as the dispatch latency.

7
SCHEDULING CRITERIA
Many criteria have been suggested for comparing CPU-scheduling algorithms, and the scheduling policy determines the importance of each. Some commonly used criteria are:
1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily used system).
2. Throughput: It is the number of processes completed per time unit. For long processes,
this rate may be 1 process per hour; for short transactions, throughput might be 10
processes per second.
3. Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
4. Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time: It is the amount of time it takes to start responding, but not the time
that it takes to output that response.
8
SCHEDULING ALGORITHMS

Scheduling algorithms may be preemptive or nonpreemptive. The main types are:
1. First-Come, First-Served Scheduling
2. Shortest Job First Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling

9
FIRST-COME, FIRST-SERVED SCHEDULING

• The process that requests the CPU first is allocated the CPU first.
• It is a non-preemptive scheduling technique.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
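A minimal Python sketch of FCFS, assuming a hypothetical workload in which all processes arrive at time 0 in the given order. The burst times are illustrative:

```python
def fcfs(bursts):
    """Return (waiting, turnaround) times for processes served in arrival order.
    Assumes all processes arrive at time 0, in the order given."""
    waiting, clock = [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent in the ready queue so far
        clock += burst               # run this process to completion
    turnaround = [w + b for w, b in zip(waiting, bursts)]
    return waiting, turnaround

# Convoy effect: one long burst scheduled first inflates everyone's wait.
w, t = fcfs([24, 3, 3])
print(w, t)  # [0, 24, 27] [24, 27, 30]
```

Note how the long first burst (24) drives the average waiting time to 17; serving the short jobs first would reduce it considerably.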

10
SHORTEST JOB FIRST SCHEDULING

• It is a non-preemptive scheduling technique.
• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie.
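The non-preemptive SJF policy above can be sketched in Python; the burst times are a hypothetical workload with all processes arriving at time 0. A stable sort provides the FCFS tie-break:

```python
def sjf(bursts):
    """Non-preemptive SJF: run processes in order of smallest next CPU burst.
    A stable sort preserves arrival order, giving the FCFS tie-break."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = [0] * len(bursts)
    clock = 0
    for i in order:
        waiting[i] = clock           # process i waited until now
        clock += bursts[i]           # then ran its full burst
    return waiting

print(sjf([6, 8, 7, 3]))  # [3, 16, 9, 0] -> average waiting time 7
</imports>```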

11
PRIORITY SCHEDULING

• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
• Can be preemptive or nonpreemptive
• SJF is priority scheduling where the priority is the inverse of the predicted next CPU burst time
• Problem: Starvation – low-priority processes may never execute
• Solution: Aging – as time progresses, increase the priority of the process
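The smallest-integer-is-highest-priority convention can be sketched with a min-heap; the workload below is hypothetical. Aging would be implemented by periodically decreasing (i.e., improving) the priority numbers of processes still waiting in the heap:

```python
import heapq

# Hypothetical workload: (priority, name, burst time); smaller number = higher priority.
ready = [(3, "P1", 10), (1, "P2", 1), (4, "P3", 2), (5, "P4", 1), (2, "P5", 5)]
heapq.heapify(ready)   # min-heap: highest-priority process at the root

order = []
while ready:
    prio, name, burst = heapq.heappop(ready)  # dispatch the highest-priority process
    order.append(name)

print(order)  # ['P2', 'P5', 'P1', 'P3', 'P4']
```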

12
ROUND ROBIN (RR)

• Each process gets a small unit of CPU time (time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n−1)q time units.
• A timer interrupts every quantum to schedule the next process.
• Performance:
  • q large → behaves like FIFO
  • q small → q must still be large with respect to context-switch time, otherwise overhead is too high
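The RR behavior described above can be sketched in a few lines of Python; the bursts and quantum below are a hypothetical workload with all processes arriving at time 0:

```python
from collections import deque

def round_robin(bursts, q):
    """Round-robin with time quantum q; returns per-process completion times."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    clock = 0
    done = [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(q, remaining)
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))  # preempted: back of the ready queue
        else:
            done[i] = clock                     # burst finished
    return done

print(round_robin([24, 3, 3], q=4))  # [30, 7, 10]
```

With q = 4 the two short processes finish quickly (at 7 and 10) instead of waiting behind the long one as in FCFS.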
13
MULTILEVEL QUEUE

• The ready queue is partitioned into separate queues, e.g.:
  • foreground (interactive)
  • background (batch)
• Processes remain permanently in a given queue
• Each queue has its own scheduling algorithm:
  • foreground – RR
  • background – FCFS
• Scheduling must be done between the queues:
  • Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  • Time slice – each queue gets a certain amount of CPU time which it can schedule among its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
14
MULTILEVEL QUEUE SCHEDULING

15
MULTILEVEL FEEDBACK QUEUE

• A process can move between the various queues; aging can be implemented this
way
• A multilevel-feedback-queue scheduler is defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process
needs service
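A minimal sketch of these parameters, assuming a hypothetical three-queue configuration: queue 0 is RR with q = 8, queue 1 is RR with q = 16, and queue 2 is FCFS. A process that uses its full quantum is demoted (all processes here arrive at time 0, so upgrades and preemption by arrivals are omitted):

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Three-level MLFQ sketch: demote a process that exhausts its quantum.
    Returns per-process completion times."""
    queues = [deque((i, b) for i, b in enumerate(bursts)), deque(), deque()]
    clock, done = 0, [0] * len(bursts)
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)   # highest non-empty queue
        i, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)  # level 2 is FCFS
        clock += run
        if rem > run:
            queues[level + 1].append((i, rem - run))          # used full quantum: demote
        else:
            done[i] = clock
    return done

print(mlfq([30, 5]))  # [35, 13]: the long job sinks to lower queues
```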

16
THREAD SCHEDULING

• Distinction between user-level and kernel-level threads
• When threads are supported, it is threads that are scheduled, not processes
• In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
  • Known as process-contention scope (PCS), since the scheduling competition is within the process
  • Typically done via a priority set by the programmer
• Scheduling a kernel thread onto an available CPU uses system-contention scope (SCS) – competition among all threads in the system

17
Pthread Scheduling

• The API allows specifying either PCS or SCS during thread creation
  • PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
  • PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
• Can be limited by the OS – Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM

18
MULTIPLE-PROCESSOR SCHEDULING

• CPU scheduling is more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
• Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes
  • Currently the most common approach
• Processor affinity – a process has affinity for the processor on which it is currently running
  • soft affinity
  • hard affinity
• Variations include processor sets

19
Multiple-Processor Scheduling – Load Balancing

• With SMP, all CPUs need to be kept loaded for efficiency
• Load balancing attempts to keep the workload evenly distributed
• Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from overloaded CPUs to other CPUs
• Pull migration – an idle processor pulls a waiting task from a busy processor

20
MULTICORE PROCESSORS

• Recent trend: place multiple processor cores on the same physical chip
• Faster and consumes less power
• Multiple threads per core is also a growing trend
• Takes advantage of memory stalls: the core makes progress on another thread while the memory retrieval completes

21
Multithreaded Multicore System

22
Real-Time CPU Scheduling
• Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled
• Hard real-time systems – a task must be serviced by its deadline
• Two types of latencies affect performance:
  1. Interrupt latency – time from the arrival of an interrupt to the start of the routine that services it
  2. Dispatch latency – time for the scheduler to take the current process off the CPU and switch to another

23
Conflict phase of dispatch latency:
1. Preemption of any process running in kernel mode
2. Release by low-priority processes of resources needed by high-priority processes

24
Deadlock Problem

• A set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.
• Example:
  • The system has 2 tape drives.
  • P1 and P2 each hold one tape drive and each needs the other one.
• Example:
  • Semaphores A and B, each initialized to 1:

        P0          P1
      wait(A);    wait(B);
      wait(B);    wait(A);

25
Bridge Crossing Example

• Traffic flows in only one direction.
• Each section of the bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).
• Several cars may have to be backed up if a deadlock occurs.
• Starvation is possible.

26
System Model

• Resource types R1, R2, . . ., Rm (CPU cycles, memory space, I/O devices)
• Each resource type Ri has Wi instances.
• Each process utilizes a resource as follows:
  • request
  • use
  • release

27
Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously.


• Mutual exclusion: only one process at a time can use a resource.
• Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.
• No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task.
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

28
Resource-Allocation Graph

A resource-allocation graph consists of a set of vertices V and a set of edges E.
• V is partitioned into two types:
  • P = {P1, P2, …, Pn}, the set of all processes in the system.
  • R = {R1, R2, …, Rm}, the set of all resource types in the system.
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi
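A resource-allocation graph can be stored as adjacency lists, and a cycle found with depth-first search. If every resource type has a single instance, a cycle implies deadlock. The graph below is a hypothetical two-process example:

```python
def has_cycle(graph):
    """DFS cycle detection on a directed graph given as adjacency lists."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GRAY:
                return True               # back edge -> cycle
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# P1 requests R1; R1 is assigned to P2; P2 requests R2; R2 is assigned to P1.
g = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(g))  # True: request and assignment edges form a cycle
```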

29
METHODS FOR HANDLING DEADLOCKS

• Ensure that the system will never enter a deadlock state.
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem and pretend that deadlocks never occur in the system; this approach is used by most operating systems, including UNIX.

30
Deadlock Prevention

Deadlock prevention restrains the ways requests can be made.
• Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources.
• Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources.
  • Either require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it holds none.
  • Low resource utilization; starvation is possible.

31
• No Preemption –
  • If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
  • Preempted resources are added to the list of resources for which the process is waiting.
  • The process will be restarted only when it can regain its old resources as well as the new ones it is requesting.
• Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in increasing order of enumeration.
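The total-ordering rule can be sketched with ordinary locks: number every resource and always acquire in increasing order, no matter the order the caller names them. The lock table and ids below are hypothetical:

```python
import threading

# Hypothetical resource table: each resource type gets a fixed ordering number.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*ids):
    """Acquire the requested locks in increasing id order, breaking circular wait."""
    taken = sorted(ids)
    for i in taken:
        locks[i].acquire()
    return taken

def release_all(ids):
    for i in reversed(ids):
        locks[i].release()

held = acquire_in_order(3, 1)   # acquired as [1, 3] regardless of argument order
release_all(held)
print(held)  # [1, 3]
```

Because every thread climbs the same ordering, no thread can hold a high-numbered resource while waiting for a lower-numbered one, so no cycle of waits can form.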

32
Deadlock Avoidance

Deadlock avoidance requires that the system have some additional a priori information available.
• The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
• The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.

33
Safe State

• When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state.
• The system is in a safe state if there exists a safe sequence of all processes.
• A sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.
  • If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
  • When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
  • When Pi terminates, Pi+1 can obtain its needed resources, and so on.
34
Safe, unsafe, and deadlocked state spaces

35
Banker’s Algorithm

• Applicable when there are multiple instances of each resource type.
• Each process must claim its maximum use a priori.
• When a process requests a resource, it may have to wait.
• When a process gets all its resources, it must return them in a finite amount of time.

36
Data Structures for the Banker’s Algorithm

Let n = number of processes and m = number of resource types.
• Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
• Max: n × m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: n × m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
• Need: n × m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need[i,j] = Max[i,j] – Allocation[i,j]
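With these data structures, the safety algorithm at the heart of the Banker's algorithm can be sketched directly: repeatedly find a process whose Need fits in Work, let it finish, and reclaim its Allocation. The state below is an illustrative example with 5 processes and 3 resource types:

```python
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    work = list(available)                 # Work = Available
    finished = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]  # Pi finishes, returns resources
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                    # no process can finish -> unsafe
    return sequence

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # [1, 3, 4, 0, 2]: a safe sequence
```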


37
THREADS

A thread is the basic unit of CPU utilization. It is sometimes called a lightweight process. It consists of a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and resources such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.

38
Advantages of thread
 Threads minimize context-switching time.
 Use of threads provides concurrency within a process.
 Threads enable efficient communication.

39
Types of Thread
Threads are implemented in the following two ways:
• User-Level Threads – threads managed by the user, without kernel involvement.
• Kernel-Level Threads – threads managed by the operating system, acting on the kernel (the operating system core).

40
Motivation

• Most modern applications are multithreaded
• Threads run within the application
• Multiple tasks within the application can be implemented by separate threads:
  • Update the display
  • Fetch data
  • Spell checking
  • Answer a network request
• Process creation is heavyweight while thread creation is lightweight
• Threads can simplify code and increase efficiency
• Kernels are generally multithreaded
41
Multithreaded Server Architecture

42
MULTICORE PROGRAMMING

• Multicore and multiprocessor systems put pressure on programmers; challenges include:
  • Dividing activities
  • Balance
  • Data splitting
  • Data dependency
  • Testing and debugging
• Parallelism implies a system can perform more than one task simultaneously
• Concurrency supports more than one task making progress
  • A single processor/core with a scheduler can provide concurrency

43
• Types of parallelism
• Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each
• Task parallelism – distributing threads across cores, each thread performing
unique operation
• As the number of threads grows, so does architectural support for threading
  • CPUs have cores as well as hardware threads
  • Consider the Oracle SPARC T4 with 8 cores and 8 hardware threads per core
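Data parallelism can be sketched with a thread pool: the same operation (here a sum) runs on different subsets of the same data. The data set and chunk count are illustrative; task parallelism would instead hand each worker a different operation:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(16))
chunks = [data[i::4] for i in range(4)]      # split the same data set four ways

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))  # same operation on each subset

print(sum(partial_sums))  # 120, identical to sum(data)
```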

44
Concurrency vs. Parallelism

Concurrent execution on single-core system:

Parallelism on a multi-core system:

45
MULTITHREADING MODELS

Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
• There are three multithreading models:
  • Many-to-many
  • Many-to-one
  • One-to-one

46
Many-to-One
• Many user-level threads mapped to single kernel thread
• One thread blocking causes all to block
• Multiple threads may not run in parallel on multicore system because only one
may be in kernel at a time
• Few systems currently use this model
• Examples:
• Solaris Green Threads
• GNU Portable Threads

47
One-to-One
• Each user-level thread maps to kernel thread
• Creating a user-level thread creates a kernel thread
• More concurrency than many-to-one
• Number of threads per process sometimes restricted due to overhead
• Examples
• Windows
• Linux
• Solaris 9 and later

48
Many-to-Many Model
• Allows many user level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Solaris prior to version 9
• Windows with the ThreadFiber package

49
THREAD MANAGEMENT IN WINDOWS

There are four basic thread-management operations:
• Thread creation
• Thread termination
• Thread join
• Thread yield

50
Thread Creation

In thread creation, the first step is to split the execution thread into two; the second step is to execute both threads concurrently. The creating thread is the parent thread, and the created thread is a child thread. Any thread, including the main program (which runs as a thread when it starts), can create child threads at any time.

51
Thread termination

A thread terminates once its assigned task completes. In a multithreaded quicksort, for example, the threads created for sorting the two array subsegments terminate after both subsegments are sorted; the thread that created these two child threads terminates too, because its own task is then complete. Similarly, in a parallel merge, the threads created to determine the positions of array elements a[i] and b[j] in the merged array terminate once the final positions are computed.

52
Thread join

• In general, thread join is for a parent to join with one of its child threads. Thread join involves the following activities, assuming that a parent thread P wants to join with one of its child threads C:
• When P executes a thread join to join with C, which is still running, P is suspended until C terminates. Once C terminates, P resumes.
• When P executes a thread join and C has already terminated, P continues as if no such thread join had ever executed (i.e., the join has no effect).
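Thread creation and join can be sketched with Python's threading module; the child tasks below (summing ranges) are illustrative:

```python
import threading

results = {}

def child(name, work):
    results[name] = sum(range(work))   # the child's assigned task

# The parent (main) thread creates two child threads...
c1 = threading.Thread(target=child, args=("C1", 10))
c2 = threading.Thread(target=child, args=("C2", 5))
c1.start(); c2.start()

# ...then joins each one: the parent suspends until that child terminates.
c1.join(); c2.join()
print(results)  # {'C1': 45, 'C2': 10} (key order may vary)
```

If a child has already terminated when its join is executed, the join returns immediately, matching the second case above.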

53
Thread yield

When a thread executes a thread yield, the executing thread is suspended and the CPU is given to some other runnable thread. The yielding thread then waits until the CPU becomes available again. Technically, in process-scheduler terminology, the executing thread is put back into the ready queue of the processor and waits for its next turn.

54
WINDOWS KERNEL MODE DRIVER INSTALLATION

1. On the host computer, navigate to the Tools folder in your WDK installation and locate the DevCon tool. For example, look in the following folder:
   C:\Program Files (x86)\Windows Kits\10\Tools\x64\devcon.exe
   Copy the DevCon tool to your remote computer.
2. On the target computer, install the driver by navigating to the folder containing the driver files, then running the DevCon tool.
   a. The general syntax for the devcon tool used to install the driver is:
      devcon install <INF file> <hardware ID>

55
The INF file required for installing this driver is KmdfHelloWorld.inf. The INF file
contains the hardware ID for installing the driver binary, KmdfHelloWorld.sys. Recall
that the hardware ID, located in the INF file, is Root\KmdfHelloWorld.
b. Open a Command Prompt window as Administrator. Navigate to your folder
containing the built driver .sys file and enter this command:
devcon install kmdfhelloworld.inf root\kmdfhelloworld
3. If you get an error message about devcon not being recognized, try adding the path
to the devcon tool. For example, if you copied it to a folder on the target computer
called C:\Tools, then try using the following command:
c:\tools\devcon install kmdfhelloworld.inf root\kmdfhelloworld
4. A dialog box will appear indicating that the test driver is an unsigned driver. Select "Install this driver software anyway" to proceed.
56