Unit 2

A Lecture on

Process and Process Scheduling

Dr. Sanjay M. Patil


Professor & Head
Department of AIDS
[email protected]
Datta Meghe College of Engineering, Airoli
Process and Process Scheduling

An important aspect of multiprogramming is scheduling. The resources that are scheduled are I/O devices and processors.

The goal is to achieve:

▪ High processor utilization: The OS must interleave the execution of multiple processes to maximize processor utilization.
▪ High throughput: The number of processes completed per unit time.
▪ Low response time: The time elapsed from the submission of a request to the beginning of the response.
Process and Process Scheduling

▪ The OS may be required to support interprocess communication and user creation of processes.

▪ The OS must allocate resources to processes in conformance with a specific policy (e.g., certain functions or applications are of higher priority).
WHAT IS A PROCESS?
Definitions
•A program in execution.
•An instance of a program running on a computer.
•The entity that can be assigned to and executed on a
processor.
•A unit of activity characterized by the execution of a sequence
of instructions, a current state, and an associated set of system
resources.

A process is an entity that consists of a number of elements. Two essential elements of a process are the program code and a set of data associated with that code.

Let us suppose the processor begins to execute this program code; we refer to this executing entity as a process.
PROCESS STATES

• For a program to be executed, a process, or task, is created for that program.
• Process state shows the current activity of the process.
• The process executes instructions from its repertoire in some sequence dictated by the changing values in the program counter register.
• From the point of view of an individual program, its execution involves a sequence of instructions within that program.
• The behavior of an individual process is characterized by listing the sequence of instructions that execute for that process, referred to as the trace of the process.
PROCESS STATES
Memory layout of three processes
• There is a small dispatcher program that switches the
processor from one process to another.
PROCESS STATES

• The figure shows the traces of each of the processes during the early part of their execution. The first 12 instructions executed in processes A and C are shown.
• Process B executes four instructions, and we assume the fourth
instruction invokes an I/O operation for which the process must
wait.
A Two-State Process Model

• In this model, a process may be in one of two states: Running or Not Running.
• When the OS creates a new process, it creates a process control block for the process and enters that process into the system in the Not Running state.
• The process exists, is known to the OS, and is waiting for an opportunity to execute.
A Two-State Process Model

• From time to time, the currently running process will be interrupted, and the dispatcher portion of the OS will select some other process to run.

• The former process moves from the Running state to the Not
Running state, and one of the other processes moves to the
Running state.
A Two-State Process Model

• Figure b shows a structure in which there is a single queue, where each entry is a pointer to the process control block of a particular process.
• A process that is interrupted is transferred to the queue of
waiting processes.
• Alternatively, if the process has completed or aborted, it is
discarded (exits the system).
• In either case, the dispatcher takes another process from the
queue to execute.
The Creation and Termination of
Processes
Process Creation:
• When a new process is to be added to those currently being
managed, the OS builds the data structures used to manage the
process, and allocates address space in main memory to the
process.
• These actions constitute the creation of a new process.
Four common events lead to the creation of a process, as
indicated in table.
The Creation and Termination of
Processes
• In a batch environment, a process is created in response to the
submission of a job.
• In an interactive environment, a process is created when a new
user attempts to log on.
• In both cases, the OS is responsible for the creation of the new
process.
• An OS may also create a process on behalf of an application. For
example, if a user requests that a file be printed, the OS can
create a process that will manage the printing.
• When the OS creates a process at the explicit request of
another process, the action is referred to as process spawning.
• When one process spawns another, the former is referred
to as the parent process, and the spawned process is
referred to as the child process.
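Process spawning as described above can be sketched on a POSIX system with `os.fork()` (a minimal sketch; POSIX-only, and the exit code 42 is an arbitrary illustrative value):

```python
import os

# Parent spawns a child; both processes continue from the fork() call.
pid = os.fork()
if pid == 0:
    # Child process: fork() returns 0 in the spawned child.
    os._exit(42)            # terminate the child immediately
else:
    # Parent process: fork() returns the child's process id.
    _, status = os.waitpid(pid, 0)          # wait for the child to finish
    exit_code = os.waitstatus_to_exitcode(status)
    print(f"spawned child {pid}, exit code {exit_code}")
```

Here the parent blocks in `waitpid` until its child terminates, mirroring case 2 of the scheduling-queue events discussed later.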
The Creation and Termination of
Processes

Process Termination

• Any computer system must provide a means for a process to indicate its completion (Table).
• A batch job includes a Halt instruction or an explicit OS service call for termination.
• In the former case, the Halt instruction will generate an interrupt to alert the OS that a process has completed.
• For an interactive application, the action of the user will indicate when the process is completed.
Reasons for Process Termination
A Five-State Model

• If all processes were always ready to execute, then this queuing discipline would be effective.
• The queue is a first-in-first-out list, and the processor operates in round-robin fashion on the available processes (each process in the queue is given a certain amount of time, in turn, to execute, and is then returned to the queue, unless blocked).
• This implementation is inadequate, however:
• Some processes in the Not Running state are ready to
execute, while others are blocked, waiting for an I/O
operation to complete.
• Using a single queue, the dispatcher can not just select the
process at the oldest end of the queue.
• The dispatcher has to scan the list looking for the process
that is not blocked and that has been in the queue the
longest.
A Five-State Model

Here we split the Not Running state into two states: Ready and Blocked.

The five states in this new diagram are as follows:

1. Running: The process that is currently being executed.
A Five-State Model

2. Ready: A process that is prepared to execute when given the opportunity.

3. Blocked/Waiting: A process that cannot execute until some event occurs, such as the completion of an I/O operation.

4. New: A process that has just been created but has not yet been admitted to the pool of executable processes by the OS.

5. Exit/Terminated: A process that has been released from the pool of executable processes by the OS, either because it halted or because it aborted for some reason.
A Five-State Model

• When a process is created, it is in the New state. After the process is admitted for execution, it moves to the Ready state, where it waits in the ready queue. The scheduler dispatches a ready process for execution, i.e., the CPU is now allocated to that process.

• While the CPU is executing the process, it is in the Running state. After a context switch, the process goes from the Running state back to the Ready state.
CONTEXT SWITCH
A Five-State Model

• If the executing process initiates an I/O operation before its allotted time expires, it voluntarily gives up the CPU.

• In this case, the process transitions from the Running state to the Waiting state. When the external event for which the process was waiting occurs, it transitions from the Waiting state to the Ready state. When the process finishes execution, it transitions to the Terminated state.
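The five-state transitions above can be sketched as a transition table (a sketch only; the event names such as "admit" and "timeout" are illustrative labels, not from the slides):

```python
# Legal transitions of the five-state model: (state, event) -> next state.
VALID_TRANSITIONS = {
    ("New", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "timeout"): "Ready",        # preempted; context switch
    ("Running", "event_wait"): "Blocked",   # e.g., an I/O request
    ("Blocked", "event_occurs"): "Ready",   # awaited event happened
    ("Running", "release"): "Exit",         # process finishes
}

def transition(state, event):
    """Return the next state, or raise on an illegal transition."""
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# Walk one process through a typical lifetime.
state = "New"
for event in ("admit", "dispatch", "event_wait", "event_occurs",
              "dispatch", "release"):
    state = transition(state, event)
print(state)  # Exit
```

Encoding the transitions as a table makes illegal moves (e.g., Blocked directly to Running) fail loudly instead of silently.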
Process State
PROCESS DESCRIPTION

• The OS controls events within the computer system.
• It schedules and dispatches processes for execution by the processor, allocates resources to processes, and responds to requests by user processes for basic services.
• The OS is the entity that manages the use of system resources by processes.
• In a multiprogramming environment, there are a number of
processes (P1 , ….., Pn) that have been created and exist in
virtual memory.
• Each process, during the course of its execution, needs access
to certain system resources, including the processor, I/O
devices, and main memory.
PROCESS DESCRIPTION

• In the figure, process P1 is running; at least part of the process is in main memory, and it has control of two I/O devices.
• Process P2 is also in main memory, but is blocked waiting for
an I/O device allocated to P1.
• Process Pn has been swapped out and is therefore suspended.
PROCESS DESCRIPTION

What information does the OS need to control processes and manage resources for them?

Operating System Control Structures

• The OS must have information about the current status of each process and resource.
• The OS constructs and maintains tables of information about
each entity that it is managing.
• Figure shows four different types of tables maintained by the
OS: memory, I/O, file, and process.
• All operating systems maintain information in these four
categories.
Structure of OS Control Tables
PROCESS DESCRIPTION

Memory tables are used to keep track of both main (real) and
secondary (virtual) memory. Some of main memory is reserved for
use by the OS; the remainder is available for use by processes.

The memory tables must include the following information:

• The allocation of main memory to processes
• The allocation of secondary memory to processes
• Any protection attributes of blocks of main or virtual memory,
such as which processes may access certain shared memory
regions
• Any information needed to manage virtual memory
PROCESS DESCRIPTION

I/O tables are used by the OS to manage the I/O devices and
channels of the computer system.
• At any given time, an I/O device may be available or assigned
to a particular process.
• If an I/O operation is in progress, the OS needs to know the
status of the I/O operation and the location in main memory
being used as the source or destination of the I/O transfer.

File tables.
• These tables provide information about the existence of files,
their location on secondary memory, their current status, and
other attributes.

Process tables are used by the OS to manage processes.
PROCESS DESCRIPTION

Process Control Structures

The OS must know the following if it is to manage and control a process:
• First, it must know where the process is located;
• second, it must know the attributes of the process that are
necessary for its management (e.g., process ID and process
state).

Process Location
• A process must include a program or set of programs to be
executed.
• These programs have a set of data locations for local and
global variables and any defined constants.
• A process must consist of at least sufficient memory to hold
the programs and data of that process.
PROCESS DESCRIPTION

• The execution of a program involves a stack that is used to keep track of procedure calls and parameter passing between procedures.
• Each process has associated with it a number of attributes that are used by the OS for process control.
• The collection of attributes is referred to as the process control block. The collection of program, data, stack, and attributes is called the process image (Table).
PROCESS DESCRIPTION

Process Attributes

In a multiprogramming system, information about each process resides in a process control block.

The process control block information is grouped into three general categories:
1. Process identification
2. Processor state information
3. Process control information
PROCESS DESCRIPTION

1. Process identification:
Each process is assigned a unique numeric identifier, which may
simply be an index into the primary process table.
2. Processor state information:
• It consists of the contents of processor registers. While a process is running, this information is in the registers.
• When a process is interrupted, all of this register information is
saved so it can be restored when the process resumes
execution.
• All processor designs include a register or set of registers,
known as the program status word (PSW), that contains
status information.
3. Process control information:
This is the additional information needed by the OS to control
and coordinate the various active processes.
PROCESS CONTROL BLOCK

• A process is identified by its Process Control Block (PCB).
• The PCB is a data structure used by the OS to keep track of the process.
• When a process is interrupted, the current values of the
program counter and the processor registers (context data)
are saved in the appropriate fields of the corresponding
process control block, and the state of the process is changed
to some other value, such as blocked or ready .
• The OS is now free to put some other process in the running
state.
PROCESS CONTROL BLOCK

▪ Identifier: A unique identifier associated with this process, to distinguish it from all other processes.
▪ State: If the process is currently executing, it is
in the running state.
▪ Priority: Priority level relative to other
processes.
▪ Program counter: The address of the next
instruction in the program to be executed.
▪ Memory pointers: Include pointers to the
program code and data associated with this
process, plus any memory blocks shared with
other processes.
PROCESS CONTROL BLOCK

▪ Context data: These are data that are present in registers in the processor while the process is executing.

▪ I/O status information: Includes outstanding I/O requests, I/O devices assigned to this process, a list of files in use by the process, and so on.

▪ Accounting information: May include the amount of processor time and clock time used, time limits, account numbers, and so on.
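The PCB fields listed above can be sketched as a small data structure (a sketch only; the field names and the `save_context` helper are illustrative, not from any real OS):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block with the fields from the slides."""
    pid: int                        # identifier
    state: str = "New"              # process state
    priority: int = 0               # priority relative to other processes
    program_counter: int = 0        # address of the next instruction
    memory_pointers: list = field(default_factory=list)  # code/data/shared blocks
    context_data: dict = field(default_factory=dict)     # saved register values
    io_status: list = field(default_factory=list)        # outstanding I/O, open files
    accounting: dict = field(default_factory=dict)       # CPU time used, limits, etc.

def save_context(pcb, pc, registers):
    """On an interrupt: save context data and move the process out of Running."""
    pcb.program_counter = pc
    pcb.context_data = dict(registers)
    pcb.state = "Ready"

p = PCB(pid=1, state="Running")
save_context(p, pc=0x40, registers={"r0": 7, "r1": 3})
print(p.state, hex(p.program_counter))  # Ready 0x40
```

This mirrors the interrupt handling described above: registers and program counter are saved into the PCB so the process can later resume exactly where it left off.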
UNIPROCESSOR SCHEDULING
Uniprocessor Scheduling

• In a multiprogramming system, multiple processes exist concurrently in main memory.

• Each process alternates between using a processor and waiting for some event to occur, such as the completion of an I/O operation.

• The processor or processors are kept busy by executing one process while the other processes wait.

• The key to multiprogramming is scheduling.


Scheduling Queues

• Processes entering the system are stored in the Job Queue.

• Processes in the Ready state are placed in the Ready Queue.

• Processes waiting for a device are placed in Device Queues; there is a separate device queue for every I/O device.

• A new process is first placed in the Ready queue, where it waits until it is selected for execution.
Scheduling Queues

• Once the process is assigned to the CPU and is executing, one of the following events may occur:

1. The process issues an I/O request and is then placed in an I/O queue.
2. The process creates a new subprocess and waits for its termination.
3. The process is removed forcibly from the CPU as the result of an interrupt, and is put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue.
Queuing Diagram representation of
Process Scheduling

• A process continues this cycle till it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
TYPES OF PROCESSOR SCHEDULING

• The aim of processor scheduling is to assign processes to be executed by the processor or processors over time, in a way that meets system objectives, such as response time, throughput, and processor efficiency.
• In many systems, this scheduling activity is broken down into
three separate functions: long-, medium-, and short-term
scheduling.
• The names suggest the relative time scales with which these
functions are performed.
Scheduling and Process State
Transitions
Figure relates the scheduling functions to the process state
transition diagram.
TYPES OF PROCESSOR SCHEDULING

• Long-term scheduling is performed when a new process is created. This is a decision whether to add a new process to the set of processes that are currently active.

• Medium-term scheduling is a part of the swapping function. This is a decision whether to add a process to those that are at least partially in main memory and therefore available for execution.

• Short-term scheduling is the actual decision of which ready process to execute next.

❖ Scheduling affects the performance of the system because it determines which processes will wait and which will progress.
TYPES OF PROCESSOR SCHEDULING

Long-term scheduling

• When programs are submitted to the system for processing, the long-term scheduler becomes aware of them.
• Its job is to choose processes from the queue and place them into main memory for execution.
• CPU-bound processes require more CPU time and less I/O time until execution completes.
• On the contrary, I/O-bound processes use up more time doing I/O and require less CPU time for computation.
• The job of the long-term scheduler is to provide a balanced mix of I/O-bound and CPU-bound jobs.
TYPES OF PROCESSOR SCHEDULING

• The number of processes in memory for execution and the degree of multiprogramming are related: more processes in memory for execution means a higher degree of multiprogramming.
• The long-term scheduler controls the degree of multiprogramming.
• If the average rate of new process creation equals the average rate of processes leaving the system, then the degree of multiprogramming is steady.
• A long-term scheduler is not present in time-sharing operating systems.
• The long-term scheduler comes into the picture when a process makes the state transition from New to Ready.
Medium-Term Scheduling

• Medium-term scheduling is part of the swapping function.
• If the degree of multiprogramming increases, the medium-term scheduler swaps processes out of main memory.
• The swapped-out processes are later swapped back in by the medium-term scheduler.
• This is done to control the degree of multiprogramming or to free up memory.
• It also helps to balance the mix of different processes; some time-sharing operating systems have this additional scheduler.
Short-Term Scheduling

• Processes in the ready queue wait for the CPU. The short-term scheduler chooses a process from the ready queue and assigns it to the CPU based on some policy.
• These policies include First Come First Served (FCFS), Shortest Job First (SJF), Priority-based, and Round Robin. The main objective is to increase system performance by keeping the CPU busy.
• This is the transition of a process from the Ready state to the Running state. The actual allocation of the process to the CPU is done by the dispatcher.
• The short-term scheduler is faster than the long-term scheduler and is invoked much more frequently.
COMPARISON OF SCHEDULERS
TYPES OF SCHEDULING
There are two general categories:

• Non-preemptive:
In this case, once a process is in the Running state, it continues to execute until
(a) it terminates, or
(b) it blocks itself to wait for I/O or to request some OS service.

• Preemptive:
• The currently running process may be interrupted and moved to the Ready state by the OS, i.e., control of the CPU can be taken from the running process.
• The decision to preempt may be made when a new process arrives, when an interrupt occurs that places a blocked process in the Ready state, or periodically, based on a clock interrupt.
Preemptive vs. Non-preemptive

• Preemptive policies incur greater overhead than non-preemptive ones, but may provide better service to the total population of processes because they prevent any one process from monopolizing the processor for very long.

• The cost of preemption may be kept relatively low by using efficient process-switching mechanisms and by providing a large main memory to keep a high percentage of programs in main memory.
Scheduling Criteria
Different CPU scheduling algorithms have different properties
and the choice of a particular algorithm depends on the various
factors.
Many criteria have been suggested for comparing CPU
scheduling algorithms.
SCHEDULING ALGORITHMS

• CPU scheduling deals with deciding which of the processes in the ready queue should be allocated to the CPU.
• There are several different CPU scheduling algorithms used within an operating system.

1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
CPU I/O Burst Cycles
Scheduling Algorithms:
First-Come, First-Served (FCFS)

Example 1: Three processes arrive in order P1, P2, P3.
P1 burst time: 24
P2 burst time: 3
P3 burst time: 3

Draw the Gantt Chart and compute the Average Waiting Time and Average Completion Time.
Scheduling Algorithms: First-Come,
First-Served (FCFS)
* Example: Three processes arrive in order P1, P2, P3.
* P1 burst time: 24
* P2 burst time: 3
* P3 burst time: 3

* The Gantt chart is:

| P1 | P2 | P3 |
0    24   27   30

* Waiting Time:
* P1: 0
* P2: 24
* P3: 27
* Completion Time:
* P1: 24
* P2: 27
* P3: 30
* Average Waiting Time: (0+24+27)/3 = 17
* Average Completion Time: (24+27+30)/3 = 27
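The FCFS arithmetic above can be checked with a short simulation (a sketch; the helper name `fcfs` is illustrative, and all processes are assumed to arrive at time 0):

```python
def fcfs(bursts):
    """FCFS: run jobs to completion in arrival order.
    Returns (waiting_times, completion_times) per job."""
    waiting, completion, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)       # time spent queued before its first run
        clock += burst              # FCFS runs each job to completion
        completion.append(clock)
    return waiting, completion

w, c = fcfs([24, 3, 3])             # order P1, P2, P3
print(w, sum(w) / 3)                # [0, 24, 27] 17.0
print(c, sum(c) / 3)                # [24, 27, 30] 27.0
```

Running the same function on the reordered bursts `[3, 3, 24]` (order P2, P3, P1) reproduces the averages of 3 and 13 shown on the next slide.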
Scheduling Algorithms: First-Come,
First-Served (FCFS)

* What if their order had been P2, P3, P1?
* P1 burst time: 24
* P2 burst time: 3
* P3 burst time: 3
Scheduling Algorithms: First-Come,
First-Served (FCFS)
* What if their order had been P2, P3, P1?
* P1 burst time: 24
* P2 burst time: 3
* P3 burst time: 3

* The Gantt chart is:

| P2 | P3 | P1 |
0    3    6    30

* Waiting Time:
* P2: 0
* P3: 3
* P1: 6
* Completion Time:
* P2: 3
* P3: 6
* P1: 30
* Average Waiting Time: (0+3+6)/3 = 3 (compared to 17)
* Average Completion Time: (3+6+30)/3 = 13 (compared to 27)
Scheduling Algorithms: First-Come,
First-Served (FCFS)

FIFO Pros and Cons:

1. Simple (+)
2. Short jobs get stuck behind long ones (-)
* If all you're buying is milk, doesn't it always seem like you are stuck behind a cart full of many items?
3. Performance is highly dependent on the order in which jobs arrive (-)
SCHEDULING ALGORITHMS
Shortest-Job-First Scheduling
SCHEDULING ALGORITHMS
Priority Scheduling
Priority Scheduling(Summary)
* A priority number (integer) is associated with each process
* The CPU is allocated to the process with the highest priority
(smallest integer ≡ highest priority)
* Preemptive (if a higher priority process enters, it receives the
CPU immediately)
* Non-preemptive (higher priority processes must wait until
the current process finishes; then, the highest priority ready
process is selected)
* SJF is a priority scheduling where priority is the predicted next
CPU burst time
* Problem ≡ Starvation – low priority processes may never
execute
* Solution ≡ Aging – as time progresses, increase the priority of the process
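The summary above (smallest number = highest priority, starvation, aging) can be sketched as a non-preemptive priority scheduler (a sketch only; the aging step and the example jobs are illustrative assumptions):

```python
def priority_schedule(jobs, aging_step=1):
    """Non-preemptive priority scheduling with aging.
    jobs: list of (name, burst, priority); smaller priority number = higher.
    Returns the order in which jobs run."""
    ready = [list(j) for j in jobs]
    order = []
    while ready:
        ready.sort(key=lambda j: j[2])    # pick the highest-priority job
        name, burst, _ = ready.pop(0)
        order.append(name)                # run it to completion (non-preemptive)
        for j in ready:
            j[2] -= aging_step            # aging: waiting jobs gain priority
    return order

print(priority_schedule([("A", 10, 3), ("B", 5, 1), ("C", 8, 2)]))
# ['B', 'C', 'A']
```

Without the aging step, a stream of high-priority arrivals could starve job A indefinitely; aging guarantees every waiting job eventually reaches the top.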
Round Robin (RR) Scheduling
Example of RR with Time Quantum = 4

Process  Burst Time
P1       24
P2       3
P3       3

* The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
Example of RR with Time Quantum = 4
Process  Burst Time
P1       24
P2       3
P3       3

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

* Waiting Time:
* P1: (10-4) = 6
* P2: (4-0) = 4
* P3: (7-0) = 7
* Completion Time:
* P1: 30
* P2: 7
* P3: 10
* Average Waiting Time: (6 + 4 + 7)/3= 5.67
* Average Completion Time: (30+7+10)/3=15.67
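The quantum-4 example above can be reproduced with a small round-robin simulation (a sketch; all processes are assumed to arrive at time 0, and the helper name `round_robin` is illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR scheduling. bursts: {name: burst time}.
    Returns (waiting, completion) dicts per process."""
    remaining = dict(bursts)
    queue = deque(bursts)            # FIFO ready queue, in arrival order
    clock = 0
    completion = {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])  # may finish before the quantum expires
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock
        else:
            queue.append(name)       # preempted: back to the tail of the queue
    # With arrival at 0: waiting time = completion time - burst time.
    waiting = {n: completion[n] - bursts[n] for n in bursts}
    return waiting, completion

w, c = round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4)
print(w)  # {'P1': 6, 'P2': 4, 'P3': 7}
print(c)  # {'P1': 30, 'P2': 7, 'P3': 10}
```

The `min(quantum, remaining)` step is what lets a short job such as P2 release the CPU after only 3 units instead of holding its full quantum.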
Example of RR with Time Quantum = 20

* A process can finish before the time quantum expires and release the CPU.

* Waiting Time:

* P1: (68-20)+(112-88) = 72
* P2: (20-0) = 20
* P3: (28-0)+(88-48)+(125-108) = 85
* P4: (48-0)+(108-68) = 88
* Completion Time:
* P1: 125
* P2: 28
* P3: 153
* P4: 112
* Average Waiting Time: (72+20+85+88)/4 = 66.25
* Average Completion Time: (125+28+153+112)/4 = 104.5
Threads
Single and Multithreaded Processes
User Level Threads

* Thread management is done by a user-level threads library rather than via system calls.
* Thread switching does not need to call the operating system or cause an interrupt to the kernel.
* The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
* In a user-level implementation, all of the work of thread management is done by the thread package.
* Thread management includes creation and termination of threads, message and data passing between threads, scheduling threads for execution, thread synchronization, and saving and restoring thread context after a context switch.
User Level Threads
* User-level threads require extremely low overhead and can achieve high computational performance.
User Level Threads
* Advantages :
* A user-level threads package can be implemented on an
Operating System that does not support threads.
* User-level threads do not require any modification to OS.
* Simple Representation : Each thread is represented simply by
a PC, registers, stack and a small control block, all stored in
the user process address space.
* Simple Management : This simply means that creating a
thread, switching between threads and synchronization
between threads can all be done without intervention of the
kernel.
* Fast and Efficient : Thread switching is no more expensive than a procedure call.
User Level Threads

Disadvantages :
• There is a lack of coordination between threads and the operating system kernel.
* User-level threads are unsuitable for applications in which, when one thread blocks, another must continue to run in parallel.
* If one thread is blocked on I/O, the entire process gets blocked.

Kernel Level Threads
* The kernel knows about the threads and manages them.
* The kernel has a thread table that keeps track of all the threads
in the system.
* The kernel maintains the traditional process table to keep track
of the processes.
Advantages:
* Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a process having a large number of threads than to a process having a small number of threads.
* Kernel-level threads are especially good for applications that frequently block.
Kernel Level Threads

Disadvantages:
* Kernel-level threads are slow and inefficient; thread operations can be hundreds of times slower than user-level thread operations.
* The kernel must manage and schedule threads as well as processes, and it requires a full thread control block (TCB) for each thread to maintain information about it.
Multithreading Models

* Many-to-One
* One-to-One
* Many-to-Many
Many-to-One

* Many user-level threads mapped to a single kernel thread
* One thread blocking causes all to block
* Multiple threads may not run in parallel on a multicore system because only one may be in the kernel at a time
* Few systems currently use this model
Many-to-One
One-to-One

* Each user-level thread maps to a kernel thread.
* Creating a user-level thread creates a kernel thread.
* More concurrency than many-to-one by allowing another thread to run when a thread makes a blocking system call.
* Number of threads per process sometimes restricted due to overhead.
* Allows multiple threads to run in parallel on multiprocessors.
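The one-to-one model can be illustrated with Python's `threading` module, where each `Thread` object is backed by its own kernel thread, so a blocking call in one thread does not block its siblings (the worker function and shared list are illustrative):

```python
import threading

results = []
lock = threading.Lock()

def worker(n):
    # Synchronize access to the shared list across kernel threads.
    with lock:
        results.append(n * n)

# Each Thread maps to one kernel thread (one-to-one).
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()      # wait for every thread to terminate

print(sorted(results))  # [0, 1, 4, 9]
```

The lock plays the role of the thread-synchronization duty listed earlier under thread management; without it, concurrent appends to shared state would need some other coordination.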
One-to-One
Many-to-Many Model
Questions

Queries may be sent to

[email protected]
