UNIT 2 OS - PPT
OPERATING SYSTEMS
UNIT II
CPU SCHEDULING AND THREADS
CPU SCHEDULING
CPU scheduling is the process by which the operating system allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource, such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast, and fair.
CPU-I/O Burst Cycle
CPU SCHEDULER
• Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed.
• The selection process is carried out by the short-term scheduler (or CPU
scheduler).
• The ready queue is not necessarily a first-in, first-out (FIFO) queue. It may be a
FIFO queue, a priority queue, a tree, or simply an unordered linked list.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is nonpreemptive; otherwise, it is preemptive.
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler.
This function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program
The time it takes for the dispatcher to stop one process and start another running is
known as the dispatch latency.
SCHEDULING CRITERIA
Many criteria have been suggested that a scheduler may use in attempting to maximize system performance. The scheduling policy determines the importance of each criterion. Some commonly used criteria are:
1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly
loaded system) to 90 percent (for a heavily used system).
2. Throughput: It is the number of processes completed per time unit. For long processes,
this rate may be 1 process per hour; for short transactions, throughput might be 10
processes per second.
3. Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
4. Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time: It is the amount of time it takes to start responding, but not the time
that it takes to output that response.
SCHEDULING ALGORITHMS
FIRST-COME, FIRST-SERVED SCHEDULING
• The process that requests the CPU first is allocated the CPU first.
• It is a non-preemptive scheduling technique.
• The implementation of the FCFS policy is easily managed with a FIFO queue.
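The FCFS policy above can be sketched as a short simulation: each process's waiting time is simply the sum of the bursts that ran before it. The process burst values below are illustrative, not from the slides.

```python
def fcfs_waiting_times(bursts):
    """Given CPU bursts in arrival order, return per-process waiting times."""
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)  # time this process spent in the ready queue
        elapsed += burst         # CPU is busy with this burst next
    return waiting

bursts = [24, 3, 3]             # P1, P2, P3 arrive in this order
waits = fcfs_waiting_times(bursts)
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

Note how the long first burst inflates the average waiting time, the well-known convoy effect of FCFS.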
SHORTEST JOB FIRST SCHEDULING
PRIORITY SCHEDULING
ROUND ROBIN (RR)
• Each process gets a small unit of CPU time (a time quantum q), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
• If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
• A timer interrupts every quantum to schedule the next process.
• Performance:
  • If q is large, RR degenerates to FIFO.
  • If q is small, q must still be large with respect to the context-switch time; otherwise, the overhead is too high.
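The round-robin behavior described above can be sketched as a small simulation: a FIFO ready queue where a process that exhausts its quantum goes to the back. Burst times and the quantum are illustrative.

```python
from collections import deque

def round_robin(bursts, q):
    """Simulate RR scheduling; returns each process's completion time."""
    ready = deque(enumerate(bursts))       # (pid, remaining burst), FIFO order
    time, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(q, remaining)            # run for at most one quantum
        time += run
        remaining -= run
        if remaining:
            ready.append((pid, remaining)) # preempted: back of the ready queue
        else:
            completion[pid] = time         # finished within its quantum
    return completion

print(round_robin([24, 3, 3], q=4))        # {1: 7, 2: 10, 0: 30}
```

The short processes finish early (at 7 and 10) instead of waiting behind the 24-unit burst as they would under FCFS.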
MULTILEVEL QUEUE
MULTILEVEL FEEDBACK QUEUE
• A process can move between the various queues; aging can be implemented this
way
• Multilevel-feedback-queue scheduler defined by the following parameters:
• number of queues
• scheduling algorithms for each queue
• method used to determine when to upgrade a process
• method used to determine when to demote a process
• method used to determine which queue a process will enter when that process
needs service
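A minimal sketch of the parameters above, assuming just two queues: queue 0 uses RR with quantum 2, and a process that uses its full quantum without finishing is demoted to queue 1, which runs FCFS only when queue 0 is empty. Queue count, quantum, and burst values are all invented for illustration.

```python
from collections import deque

def mlfq(bursts, quantum=2):
    """Two-level multilevel feedback queue: RR on top, FCFS below."""
    q0 = deque(enumerate(bursts))  # (pid, remaining); all processes start high
    q1 = deque()
    time, finish = 0, {}
    while q0 or q1:
        if q0:                     # queue 0 has strict priority over queue 1
            pid, rem = q0.popleft()
            run = min(quantum, rem)
        else:
            pid, rem = q1.popleft()
            run = rem              # FCFS: run to completion
        time += run
        rem -= run
        if rem:
            q1.append((pid, rem))  # demote: used its full quantum
        else:
            finish[pid] = time
    return finish

print(mlfq([5, 2, 8]))             # {1: 4, 0: 9, 2: 15}
```

Process 1 (a short job) finishes in the top queue; the longer jobs are demoted and complete later, which is exactly the bias toward interactive work that MLFQ aims for.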
THREAD SCHEDULING
Pthread Scheduling
MULTIPLE-PROCESSOR SCHEDULING
Multiple-Processor Scheduling – Load Balancing
MULTICORE PROCESSORS
Multithreaded Multicore System
Real-Time CPU Scheduling
Real-time scheduling presents distinct challenges:
• Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled
• Hard real-time systems – a task must be serviced by its deadline
• Two types of latencies affect performance:
1. Interrupt latency – the time from the arrival of an interrupt to the start of the routine that services it
2. Dispatch latency – the time for the scheduler to take the current process off the CPU and switch to another
Conflict phase of dispatch latency:
1. Preemption of any process running in kernel mode
2. Release by low-priority processes of resources needed by high-priority processes
Deadlock Problem
P0:  wait(A);  wait(B);
P1:  wait(B);  wait(A);
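A runnable sketch of the wait(A)/wait(B) pattern above, using Python locks in place of semaphores. A real deadlock would hang forever, so each thread here uses an acquire timeout to detect the circular wait and back off; the names and timing values are illustrative.

```python
import threading

A, B = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)    # makes both threads hold their first lock
stuck = []

def process(first, second, name):
    first.acquire()               # wait(first) succeeds
    barrier.wait()                # now A and B are each held by one thread
    got = second.acquire(timeout=0.5)  # wait(second): would block forever
    barrier.wait()                # both threads have finished trying
    if got:
        second.release()
    else:
        stuck.append(name)        # circular wait detected via the timeout
    first.release()

p0 = threading.Thread(target=process, args=(A, B, "P0"))  # wait(A); wait(B)
p1 = threading.Thread(target=process, args=(B, A, "P1"))  # wait(B); wait(A)
p0.start(); p1.start()
p0.join(); p1.join()
print(sorted(stuck))              # ['P0', 'P1'] -- both hit the circular wait
```

Each process holds one lock and waits for the other, so neither acquire can ever succeed: exactly the hold-and-wait cycle the slide describes.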
Bridge Crossing Example
System Model
Deadlock Characterization
Resource-Allocation Graph
• P = {P1, P2, …, Pn}, the set consisting of all the active processes in the system.
• R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
• request edge – a directed edge Pi → Rj
• assignment edge – a directed edge Rj → Pi
METHODS FOR HANDLING DEADLOCKS
Deadlock Prevention
• No Preemption –
• If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held
are released.
• Preempted resources are added to the list of resources for which the process is
waiting.
• Process will be restarted only when it can regain its old resources, as well as
the new ones that it is requesting.
• Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration.
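The total-ordering idea above can be sketched with a small helper: every lock carries a fixed rank, and acquisition always proceeds in increasing rank, regardless of the order the caller asks for. The ranks and names below are invented for illustration.

```python
import threading

class OrderedLock:
    """A lock tagged with its position in the global resource ordering."""
    def __init__(self, rank):
        self.rank = rank
        self.lock = threading.Lock()

def acquire_all(*locks):
    """Acquire locks in rank order, whatever order they are requested in."""
    ordered = sorted(locks, key=lambda l: l.rank)
    for l in ordered:
        l.lock.acquire()
    return ordered

def release_all(locks):
    for l in reversed(locks):      # release in the opposite order
        l.lock.release()

A, B = OrderedLock(1), OrderedLock(2)
held = acquire_all(B, A)           # requested out of order, acquired in order
print([l.rank for l in held])      # [1, 2]
release_all(held)
```

Because every thread acquires in the same global order, no cycle of "holds X, waits for Y" can ever close, so the circular-wait condition can never hold.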
Deadlock Avoidance
Safe State
Banker’s Algorithm
• Applicable when resource types have multiple instances.
• Each process must a priori claim its maximum resource use.
• When a process requests a resource, it may have to wait.
• When a process gets all its resources, it must return them in a finite amount of time.
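The core of the Banker's algorithm is its safety check: the state is safe if the processes can finish in some order, each using Available plus whatever earlier finishers released. Below is a sketch of that check; the Allocation and Need matrices are illustrative textbook-style values, not from the slides.

```python
def is_safe(available, allocation, need):
    """Return (safe?, completion order) for a Banker's-algorithm state."""
    work = list(available)               # resources currently free
    finish = [False] * len(allocation)
    order = []
    progress = True
    while progress:                      # keep sweeping until no one can run
        progress = False
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                for j in range(len(work)):       # process i can finish, so it
                    work[j] += allocation[i][j]  # releases everything it holds
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order

# Illustrative state with 5 processes and 3 resource types
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe([3,3,2], allocation, need))   # (True, [1, 3, 4, 0, 2])
```

A request is granted only if pretending to grant it still leaves the state safe by this check; otherwise the process waits.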
Data Structures for the Banker’s Algorithm
Advantages of Threads
• Threads minimize the context-switching time.
• Use of threads provides concurrency within a process.
• Threads allow efficient communication.
Types of Threads
Threads are implemented in the following two ways:
• User-Level Threads – threads managed by the user, without kernel support.
• Kernel-Level Threads – threads managed by the operating system, acting on the kernel, the operating system core.
Motivation
MULTICORE PROGRAMMING
• Types of parallelism
• Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each
• Task parallelism – distributing threads across cores, each thread performing
unique operation
• As # of threads grows, so does architectural support for threading
• CPUs have cores as well as hardware threads
• Consider Oracle SPARC T4 with 8 cores, and 8 hardware threads per core
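The two kinds of parallelism above can be contrasted in a few lines. In the data-parallel case, the same operation (summing) runs on different subsets of the data; in the task-parallel case, different operations run on the same data. The data values and the use of a thread pool here are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(8))

# Data parallelism: same operation, different chunks of the data.
with ThreadPoolExecutor() as pool:
    partial = list(pool.map(sum, [data[:4], data[4:]]))
print(partial, sum(partial))            # [6, 22] 28

# Task parallelism: different operations on the same data.
with ThreadPoolExecutor() as pool:
    f_min = pool.submit(min, data)
    f_max = pool.submit(max, data)
print(f_min.result(), f_max.result())   # 0 7
```

On a multicore machine each chunk or task can land on its own core; on a single core the same code still runs correctly, just without the speedup.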
Concurrency vs. Parallelism
MULTITHREADING MODELS
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
• There are three multithreading models:
• Many-to-many relationship
• Many-to-one relationship
• One-to-one relationship
Many-to-One
• Many user-level threads mapped to single kernel thread
• One thread blocking causes all to block
• Multiple threads may not run in parallel on multicore system because only one
may be in kernel at a time
• Few systems currently use this model
• Examples:
• Solaris Green Threads
• GNU Portable Threads
One-to-One
• Each user-level thread maps to kernel thread
• Creating a user-level thread creates a kernel thread
• More concurrency than many-to-one
• Number of threads per process sometimes restricted due to overhead
• Examples
• Windows
• Linux
• Solaris 9 and later
Many-to-Many Model
• Allows many user level threads to be mapped to many kernel threads
• Allows the operating system to create a sufficient number of kernel threads
• Solaris prior to version 9
• Windows with the ThreadFiber package
THREAD MANAGEMENT IN WINDOWS
Thread Creation
In thread creation, the first step is to split the execution thread into two; the second step is to execute both threads concurrently. The creating thread is the parent thread, and the created thread is a child thread. Any thread, including the main program (which is run as a thread when it starts), can create child threads at any time.
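The two steps above can be sketched with Python's threading module: the parent (main) thread splits off a child, then both execute concurrently. The thread name and the child's work are illustrative.

```python
import threading

log = []

def child_task():
    # Step 2, child side: this runs concurrently with the parent.
    log.append("child: running in " + threading.current_thread().name)

# Step 1: split the execution thread into two.
child = threading.Thread(target=child_task, name="child-thread")
child.start()
# Step 2, parent side: the parent keeps running after the split.
log.append("parent: still running in " + threading.current_thread().name)
child.join()          # wait for the child before inspecting the log
print(sorted(log))
```

The log entries can arrive in either order, which is exactly what "execute both threads concurrently" means.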
Thread termination
In the quicksort example, after both array subsegments are sorted, the threads
created for sorting them terminate. In fact, the thread that creates these two child
threads terminates too, because its assigned task completes. In the merging
example, the threads created to determine the position of array
elements a[i] and b[j] in the merged array terminate once the final positions are
computed.
Thread join
• In general, thread join is for a parent to join with one of its child
threads. Thread join has the following activities, assuming that a parent
thread P wants to join with one of its child threads C.
• When P executes a thread join in order to join with C, which is still running, P is
suspended until C terminates. Once C terminates, P resumes.
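The P-joins-C behavior above can be sketched directly: the parent's join suspends it until the child terminates, so the child's last action is always observed before the parent resumes. The sleep length is illustrative.

```python
import threading, time

events = []

def child():              # this is C
    time.sleep(0.2)       # simulate C doing some work
    events.append("C terminated")

c = threading.Thread(target=child)
c.start()
c.join()                  # P is suspended here until C finishes
events.append("P resumed")
print(events)             # ['C terminated', 'P resumed']
```

Because join blocks until C terminates, the ordering of the two events is guaranteed, unlike the create-and-run-concurrently case.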
Thread yield
When a thread executes a thread yield, the executing thread is suspended and the
CPU is given to some other runnable thread. This thread will wait until the CPU
becomes available again. Technically, in process scheduler's terminology, the
executing thread is put back into the ready queue of the processor and waits for its
next turn.
WINDOWS KERNEL MODE DRIVER INSTALLATION
1. On the host computer, navigate to the Tools folder in your WDK installation and
locate the DevCon tool. For example, look in the following folder:
• C:\Program Files (x86)\Windows Kits\10\Tools\x64\devcon.exe
• Copy the DevCon tool to the target computer.
2. On the target computer, install the driver by navigating to the folder containing
the driver files, then running the DevCon tool.
a. Here's the general syntax for the devcon tool that will help to install the driver:
devcon install <INF file> <hardware ID>
The INF file required for installing this driver is KmdfHelloWorld.inf. The INF file
contains the hardware ID for installing the driver binary, KmdfHelloWorld.sys. Recall
that the hardware ID, located in the INF file, is Root\KmdfHelloWorld.
b. Open a Command Prompt window as Administrator. Navigate to your folder
containing the built driver .sys file and enter this command:
devcon install kmdfhelloworld.inf root\kmdfhelloworld
3. If you get an error message about devcon not being recognized, try adding the path
to the devcon tool. For example, if you copied it to a folder on the target computer
called C:\Tools, then try using the following command:
c:\tools\devcon install kmdfhelloworld.inf root\kmdfhelloworld
4. A dialog box will appear indicating that the test driver is an unsigned driver. Select "Install this driver anyway" to proceed.