
UNIT-2

PROCESS MANAGEMENT
Process Concept:
A process is a program in execution, and it forms the basis of all computation. A system consists of a
collection of processes in memory:
1. Operating-system processes, which execute system code
2. User processes, which execute user code
All these processes can execute concurrently by multiplexing the CPU among them.

A program is a passive entity, such as a file containing a list of instructions stored on disk.
A process is an active entity with a program counter (PC) value and a set of associated resources.
A program becomes a process when an executable file is loaded into memory.

A process consists of:
1. Text section: the program code, together with the current activity (program counter and register contents).
2. Stack: temporary data such as function parameters, local variables, and return addresses.
3. Data section: global variables.
4. Heap: memory allocated dynamically during process runtime.
[Figure: layout of a process in memory — stack, heap, data, and text sections.]
Two or more processes may be associated with the same program.
eg: A user may invoke many copies of the web browser program.
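A small C program can make these four sections concrete. This is an illustrative sketch only: the variable names are made up, and the printed addresses depend on the OS and architecture.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch: where the parts of a process live at runtime.
   Addresses and layout vary by OS and architecture. */
int global_counter = 0;                  /* data section: global variable */

int main(void) {                         /* main's code: text section */
    int local = 42;                      /* stack: local variable */
    int *dynamic = malloc(sizeof(int));  /* heap: allocated at runtime */
    *dynamic = 7;

    printf("text  (code)  : %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&global_counter);
    printf("heap  (malloc): %p\n", (void *)dynamic);
    printf("stack (local) : %p\n", (void *)&local);

    free(dynamic);
    return 0;
}
```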

Process States:

1. New State: The process is being created.


2. Running State: A process is said to be running if it holds the CPU, that is, the process is actually using
the CPU at that particular instant.
3. Waiting (Blocked) State: A process is said to be blocked if it is waiting for some event to happen, such
as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external
event happens.
4. Ready State: A process is said to be ready if it is waiting to be assigned a processor.
5. Terminated state: The process has finished execution.
Process Control Block (PCB):

[Figure: PCB fields — process state, process number, program counter, registers, memory limits, list of open files, ...]

Process state: The state may be new, ready, running, waiting, halted, and so on.
Program counter: The counter indicates the address of the next instruction to be executed for this
process.
CPU registers: The registers vary in number and type, depending on the computer architecture. They
include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-
code information. Along with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
Memory-management information: This may include such information as the values of the
base and limit registers, the page tables, or the segment tables, depending on the memory system
used by the operating system.
Accounting information: This information includes the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
I/O status information: This information includes the list of I/O devices allocated to this process, a list of
open files, and so on.
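As a rough illustration, a PCB can be pictured as a C structure. The field names and sizes below are hypothetical, not those of any real kernel; each field corresponds to one of the categories above.

```c
/* Hypothetical sketch of a PCB as a C structure; fields are illustrative. */
struct pcb {
    int   pid;                 /* process number                      */
    int   state;               /* new, ready, running, waiting, ...   */
    void *program_counter;     /* address of next instruction         */
    long  registers[16];       /* saved CPU registers                 */
    int   priority;            /* CPU-scheduling information          */
    void *base, *limit;        /* memory-management information       */
    long  cpu_time_used;       /* accounting information              */
    int   open_files[16];      /* I/O status information              */
    struct pcb *next;          /* link to the next PCB in a queue     */
};
```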

Process Scheduling Queues


Scheduling is the activity that decides the order in which processes are going to execute; with the help of
multiprogramming, it lets the system utilize the CPU efficiently.
Job Queue: This queue consists of all processes in the system; processes enter it as they are
submitted as new processes.
Ready Queue: This queue consists of the processes that are residing in main memory and are ready
and waiting to execute on the CPU. This queue is generally stored as a linked list: a
ready-queue header contains pointers to the first and final PCBs in the list, and each PCB
includes a pointer field that points to the next PCB in the ready queue (see the sketch below).
Device Queue: This queue consists of the processes that are waiting for a particular I/O device. Each
device has its own device queue.
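A minimal sketch of that linked-list representation follows. The pcb struct here is the hypothetical one from the earlier sketch, reduced to the fields needed, and the function names are illustrative.

```c
#include <stddef.h>

/* Sketch of the ready queue described above: a header with pointers to the
   first and final PCBs, each PCB linking to the next. */
struct pcb {
    int pid;
    struct pcb *next;   /* pointer field to the next PCB in the queue */
};

struct ready_queue {
    struct pcb *head;   /* first PCB: next to be dispatched      */
    struct pcb *tail;   /* final PCB: where new processes arrive */
};

void enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;   /* link onto the tail */
    else         q->head = p;         /* queue was empty    */
    q->tail = p;
}

struct pcb *dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;          /* take from the head */
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}
```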
Representation of Process Scheduling Queues:
[Figure: queueing diagram — ready queue and device queues, with processes moving between them.]

Schedulers
A scheduler is a decision maker that moves processes between scheduling queues
or allocates the CPU for execution. The operating system has three types of
scheduler:
1. Long-term scheduler or Job scheduler
2. Short-term scheduler or CPU scheduler
3. Medium-term scheduler

Long-term scheduler or Job scheduler

It is also called a job scheduler. The long-term scheduler determines which programs
are admitted to the system for processing. The job scheduler selects processes from the
job queue and loads them into memory for execution. The primary objective of the job
scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-
bound. It also controls the degree of multiprogramming. If the degree of multi-
programming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-
sharing operating systems have no long-term scheduler. The long-term scheduler
acts when a process changes state from new to ready.

Short-term scheduler or CPU scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It handles the change of a process from the ready
state to the running state. The CPU scheduler selects from among the processes that are ready
to execute and allocates the CPU to one of them. The short-term scheduler, also known as the
dispatcher, is executed most frequently and makes the fine-grained decision of which
process to execute next. The short-term scheduler is faster than the long-term scheduler.
Medium-term scheduler

Medium-term scheduling is part of swapping. It removes processes from
memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is
in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. Suspended
processes cannot make any progress towards completion. In this condition, to remove
the process from memory and make space for other processes, the suspended process is
moved to secondary storage. This is called swapping, and the process is
said to be swapped out or rolled out. Swapping may be necessary to improve the
process mix.

Comparison between Schedulers


S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
2 | Speed is slower than the short-term scheduler. | Speed is the fastest among the three. | Speed is in between the short- and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in a time-sharing system. | It is also minimal in a time-sharing system. | It is a part of time-sharing systems.
5 | It selects processes from the job pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | It can re-introduce a process into memory, so that execution can be continued.

Context Switch
A context switch is the mechanism to store and restore the state, or context, of the
CPU in the process control block so that a process's execution can be resumed from the
same point at a later time. Using this technique, a context switcher enables multiple
processes to share a single CPU. Context switching is an essential feature of a
multitasking operating system. When the scheduler switches the CPU from
executing one process to executing another, the context switcher saves the contents of all
processor registers for the process being removed from the CPU in its PCB, and loads
the saved context of the incoming process from its PCB.

CPU Scheduler
Selects from among the processes in memory that are ready to execute, and allocates
the CPU to one of them
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start
another running

Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – number of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted
until the first response is produced, not output (for time-sharing environment)

Optimization Criteria

 Max CPU utilization


 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
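The criteria above are linked by two standard identities: turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time (for a process that does no I/O). A minimal sketch follows; the three sample values are illustrative only.

```c
#include <stdio.h>

/* Turnaround and waiting time from arrival, burst, and completion times. */
int main(void) {
    int arrival = 0, burst = 24, completion = 24;
    int turnaround = completion - arrival;   /* time from submission to finish */
    int waiting    = turnaround - burst;     /* time spent in the ready queue  */
    printf("turnaround = %d ms, waiting = %d ms\n", turnaround, waiting);
    return 0;
}
```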

Scheduling algorithms
Four major scheduling algorithms here which are following :
 First Come First Serve (FCFS) Scheduling
 Shortest-Job-First (SJF) Scheduling
 Priority Scheduling
 Round Robin(RR) Scheduling

First-Come, First-Served Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-
served (FCFS) scheduling algorithm. With this scheme, the process that requests
the CPU first is allocated the CPU first. The implementation of the FCFS policy
is easily managed with a FIFO queue. When a process enters the ready queue, its
PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to
the process at the head of the queue. The running process is then removed from
the queue. The code for FCFS scheduling is simple to write and understand. The
average waiting time under the FCFS policy, however, is often quite long.
Consider the following set of processes that arrive at time 0, with the
length of the CPU-burst time given in milliseconds:
Process | Burst Time
P1 | 24
P2 | 3
P3 | 3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get
the result shown in the following Gantt chart:
Gantt chart: | P1 (0-24) | P2 (24-27) | P3 (27-30) |

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2,
and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 +
27)/3 = 17 milliseconds.
If the processes arrive in the order P2, P3, P1, however, the results will be as shown
in the following Gantt chart:
Gantt chart: | P2 (0-3) | P3 (3-6) | P1 (6-30) |

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is


substantial. Thus, the average waiting time under a FCFS policy is generally not
minimal, and may vary substantially if the process CPU-burst times vary greatly.
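A minimal sketch that computes the FCFS waiting times for the first ordering above: the process data comes from the table, and everything else (names, the assumption that all processes arrive at time 0 and are served in array order) is illustrative.

```c
#include <stdio.h>

/* FCFS waiting times: each process waits for the bursts ahead of it. */
int main(void) {
    int burst[] = {24, 3, 3};            /* P1, P2, P3 */
    int n = 3, wait = 0;
    double total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d: waiting time %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];                /* the next process waits this much more */
    }
    printf("average waiting time = %.0f ms\n", total / n);  /* 17 */
    return 0;
}
```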
In addition, consider the performance of FCFS scheduling in a dynamic
situation. Assume we have one CPU-bound process and many I/O-bound
processes. As the processes flow around the system, the following scenario may
result. The CPU-bound process will get the CPU and hold it. During this time, all
the other processes will finish their I/O and move into the ready queue, waiting
for the CPU. While the processes wait in the ready queue, the I/O devices are idle.
Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O
device. All the I/O-bound processes, which have very short CPU bursts, execute
quickly and move back to the I/O queues. At this point, the CPU sits idle.
The CPU-bound process will then move back to the ready queue and be
allocated the CPU. Again, all the I/O processes end up waiting in the ready queue
until the CPU-bound process is done. There is a convoy effect, as all the other
processes wait for the one big process to get off the CPU. This effect results in
lower CPU and device utilization than might be possible if the shorter processes
were allowed to go first.
The FCFS scheduling algorithm is non-preemptive. Once the CPU has
been allocated to a process, that process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O. The FCFS algorithm is particularly
troublesome for time-sharing systems, where each user needs to get a share of the
CPU at regular intervals. It would be disastrous to allow one process to keep the
CPU for an extended period.

Shortest-Job-First Scheduling

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling


algorithm. This algorithm associates with each process the length of the latter's
next CPU burst. When the CPU is available, it is assigned to the process that has
the smallest next CPU burst. If two processes have the same length next CPU
burst, FCFS scheduling is used to break the tie. Note that a more appropriate term
would be the shortest next CPU burst, because the scheduling is done by
examining the length of the next CPU burst of a process, rather than its total
length. We use the term SJF because most people and textbooks refer to this type
of scheduling discipline as SJF.

As an example, consider the following set of processes, with the length of


the CPU-burst time given in milliseconds:
Process | Burst Time
P1 | 6
P2 | 8
P3 | 7
P4 | 3

Using SJF scheduling, we would schedule these
processes according to the following Gantt chart:
Gantt chart: | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2,
9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the
average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. If we were using the
FCFS scheduling scheme, then the average waiting time would
be 10.25 milliseconds.
The SJF scheduling algorithm is provably optimal, in that it gives the
minimum average waiting time for a given set of processes. By moving a short
process before a long one, the waiting time of the short process decreases more
than it increases the waiting time of the long process. Consequently, the average
waiting time decreases.
The real difficulty with the SJF algorithm is knowing the length of the next CPU
request. For long-term (or job) scheduling in a batch system, we can use as the
length the process time limit that a user specifies when submitting the job.
Thus, users are motivated to estimate the process time limit accurately,
since a lower value may mean faster response. (Too low a value will cause a time-
limit-exceeded error and require resubmission.) SJF scheduling is used frequently
in long-term scheduling.
Although the SJF algorithm is optimal, it cannot be implemented at the level of
short-term CPU scheduling: there is no way to know the length of the next CPU
burst. One approach is to approximate SJF scheduling. We may not know
the length of the next CPU burst, but we may be able to predict its
value. We expect that the next CPU burst will be similar in length to the previous ones.
Thus, by computing an approximation of the length of the next CPU burst,
we can pick the process with the shortest predicted CPU burst.
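The approximation alluded to here is usually computed as an exponential average of the measured lengths of previous bursts: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the measured length of the nth burst, tau(n) is the previous prediction, and alpha (commonly 1/2) controls how heavily recent history counts. A minimal sketch; the burst history and initial guess are illustrative values only.

```c
#include <stdio.h>

/* Exponential-average predictor for the next CPU burst:
   tau_next = alpha * t_measured + (1 - alpha) * tau_old. */
int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                        /* initial guess, tau_0 */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};

    for (int i = 0; i < 7; i++) {
        printf("burst %d: predicted %.1f, actual %.0f\n", i, tau, bursts[i]);
        tau = alpha * bursts[i] + (1 - alpha) * tau;  /* fold in the new burst */
    }
    printf("next predicted burst: %.1f ms\n", tau);
    return 0;
}
```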
The SJF algorithm may be either preemptive or non-preemptive. The choice arises
when a new process arrives at the ready queue while a previous process is executing.
The new process may have a shorter next CPU burst than what is left of the currently
executing process. A preemptive SJF algorithm will preempt the currently executing
process, whereas a non-preemptive SJF algorithm will allow the currently running
process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-
remaining-time-first scheduling.

As an example, consider the following four processes, with the length of the CPU-
burst time given in milliseconds:

Process | Arrival Time | Burst Time
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 9
P4 | 3 | 5
If the processes arrive at the ready queue at the times shown and need the
indicated burst times, then the resulting preemptive SJF schedule is as depicted in
the following Gantt chart:
Gantt chart: | P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Process P1 is started at time 0, since it is the only process in the queue.
Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is
larger than the time required by process P2 (4 milliseconds), so process P1 is
preempted, and process P2 is scheduled. The average waiting time for this
example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4 = 26/4 = 6.5 milliseconds.
Non-preemptive SJF scheduling would result in an average waiting time of 7.75
milliseconds.
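A minimal millisecond-by-millisecond simulation of this preemptive schedule: the arrival and burst arrays come from the table above, while the names and the tie-breaking by array order are illustrative choices.

```c
#include <stdio.h>
#include <limits.h>

/* Shortest-remaining-time-first (preemptive SJF), simulated 1 ms at a time. */
int main(void) {
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {8, 4, 9, 5};
    int remaining[4], finish[4];
    int n = 4, done = 0, t = 0;
    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        int best = -1, min = INT_MAX;
        for (int i = 0; i < n; i++)          /* pick the arrived process with */
            if (arrival[i] <= t && remaining[i] > 0 && remaining[i] < min) {
                min = remaining[i];          /* the least remaining time      */
                best = i;
            }
        if (best < 0) { t++; continue; }     /* CPU idle: nothing has arrived */
        remaining[best]--;                   /* run the chosen process 1 ms   */
        t++;
        if (remaining[best] == 0) { finish[best] = t; done++; }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        int wait = finish[i] - arrival[i] - burst[i];
        printf("P%d: waiting time %d ms\n", i + 1, wait);
        total += wait;
    }
    printf("average waiting time = %.2f ms\n", total / n);  /* 6.50 */
    return 0;
}
```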

Priority Scheduling

The SJF algorithm is a special case of the general priority-scheduling


algorithm. A priority is associated with each process, and the CPU is allocated to
the process with the highest priority. Equal-priority processes are scheduled in
FCFS order.
An SJF algorithm is simply a priority algorithm where the priority (p) is
the inverse of the (predicted) next CPU burst. The larger the CPU burst, the
lower the priority, and vice versa.
Note that we discuss scheduling in terms of high priority and low priority.
Priorities are generally some fixed range of numbers, such as 0 to 7, or 0 to 4,095.
However, there is no general agreement on whether 0 is the highest or lowest
priority. Some systems use low numbers to represent low priority; others use low
numbers for high priority. This difference can lead to confusion. In this text, we
use low numbers to represent high priority. As an example, consider the
following set of processes, assumed to have arrived at time 0, in the order
P1, P2, ..., P5, with the length of the CPU-burst time given in milliseconds:

Process | Burst Time | Priority
P1 | 10 | 3
P2 | 1 | 1
P3 | 2 | 4
P4 | 1 | 5
P5 | 5 | 2
Using priority scheduling, we would schedule these processes according to the
following Gantt chart:
Gantt chart: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
The waiting time is 6 milliseconds for process P1, 0 milliseconds for process
P2, 16 milliseconds for process P3, 18 milliseconds for process P4, and 1 millisecond
for process P5. Thus, the average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2
milliseconds.
Priorities can be defined either internally or externally.
Internally defined priorities use some measurable quantity or quantities to compute
the priority of a process.
For example, time limits, memory requirements, the number of open files, and the
ratio of average I/O burst to average CPU burst have been used in computing
priorities. External priorities are set by criteria that are external to the operating
system, such as the importance of the process, the type and amount of funds being
paid for computer use, the department sponsoring the work, and other, often political,
factors.
Priority scheduling can be either preemptive or non-preemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process. A preemptive priority-scheduling algorithm will preempt the CPU if
the priority of the newly arrived process is higher than the priority of the currently
running process. A non-preemptive priority-scheduling algorithm will simply put the
new process at the head of the ready queue.
A major problem with priority-scheduling algorithms is indefinite
blocking (or starvation). A process that is ready to run but lacking the CPU can
be considered blocked, waiting for the CPU. A priority-scheduling algorithm can
leave some low-priority processes waiting indefinitely for the CPU.
A solution to the problem of indefinite blockage of low-priority processes is aging.
Aging is a technique of gradually increasing the priority of processes that wait in the
system for a long time. For example, if priorities range from 127 (low) to 0 (high),
we could decrement the priority of a waiting process by 1 every 15 minutes.
Eventually, even a process with an initial priority of 127 would have the highest
priority in the system and would be executed. In fact, it would take no more than 32
hours (127 steps x 15 minutes, just under 32 hours) for a priority-127 process to age
to a priority-0 process.
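A minimal sketch of aging over a ready list, assuming a PCB with a numeric priority and a queue link as in the earlier hypothetical PCB sketch; lower numbers mean higher priority, matching the 0 (high) to 127 (low) range of the example.

```c
#include <stddef.h>

/* Minimal PCB for this sketch: just a priority and a queue link. */
struct pcb { int priority; struct pcb *next; };

/* Called periodically (every 15 minutes in the example above): each
   waiting process moves one step toward the highest priority, 0. */
void age_waiting_processes(struct pcb *ready_head) {
    for (struct pcb *p = ready_head; p != NULL; p = p->next)
        if (p->priority > 0)
            p->priority--;
}
```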

Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for
time-sharing systems. It is similar to FCFS scheduling, but preemption is added to
switch between processes. A small unit of time, called a time quantum (or time
slice), is defined. A time quantum is generally from 10 to 100 milliseconds. The
ready queue is treated as a circular queue. The CPU scheduler goes around the
ready queue, allocating the CPU to each process for a time interval of
up to 1 time quantum.
To implement RR scheduling, we keep the ready queue as a FIFO queue
of processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt
after 1 time quantum, and dispatches the process.
One of two things will then happen.
 The process may have a CPU burst of less than 1 time quantum. In this case, the
process itself will release the CPU voluntarily. The scheduler will then proceed
to the next process in the ready queue.
 Otherwise, if the CPU burst of the currently running process is longer than 1 time
quantum, the timer will go off and will cause an interrupt to the operating system.
A context switch will be executed, and the process will be put at the tail of the ready queue.
The CPU scheduler will then select the next process in the ready queue.
The average waiting time under the RR policy, however, is often quite
long.
Consider the following set of processes that arrive at time 0, with the length of the
CPU-burst time given in milliseconds:
Process | Burst Time
P1 | 24
P2 | 3
P3 | 3
If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the
first time quantum, and the CPU is given to the next process in the queue, process
P2. Since process P2 does not need 4 milliseconds, it quits before its time
quantum expires. The CPU is then given to the next process, process P3. Once
each process has received 1 time quantum, the CPU is returned to process P1 for an
additional time quantum. The resulting RR schedule is:
Gantt chart: | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
The average waiting time is 17/3 = 5.66 milliseconds.
In the RR scheduling algorithm, no process is allocated the CPU for more
than 1 time quantum in a row. If a process's CPU burst exceeds 1 time quantum,
that process is preempted and is put back in the ready queue. The RR scheduling
algorithm is preemptive.
If there are n processes in the ready queue and the time quantum is q, then
each process gets 1/n of the CPU time in chunks of at most q time units. Each
process must wait no longer than (n - 1) x q time units until its next time quantum.
For example, if there are five processes, with a time quantum of 20 milliseconds,
then each process will get up to 20 milliseconds every 100 milliseconds.
The performance of the RR algorithm depends heavily on the size of the
time quantum. At one extreme, if the time quantum is very large (infinite), the
RR policy is the same as the FCFS policy. If the time quantum is very small (say
1 microsecond), the RR approach is called processor sharing, and appears (in
theory) to the users as though each of n processes has its own processor running
at 1/n the speed of the real processor. This approach was used in Control Data
Corporation (CDC) hardware to implement 10 peripheral processors with only
one set of hardware and 10 sets of registers. The hardware executes one
instruction for one set of registers, then goes on to the next. This cycle continues,
resulting in 10 slow processors rather than one fast processor.
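A minimal simulation of the worked example above (quantum 4 ms, all processes arriving at time 0). The circular scan over the array stands in for the FIFO ready queue, which is valid here only because every process arrives at time 0 and finished processes are skipped; all names are illustrative.

```c
#include <stdio.h>

/* Round-robin with quantum q = 4 ms for P1 = 24, P2 = 3, P3 = 3. */
int main(void) {
    int burst[]     = {24, 3, 3};
    int remaining[] = {24, 3, 3};
    int finish[3];
    int n = 3, q = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;             /* already finished  */
            int slice = remaining[i] < q ? remaining[i] : q;  /* one quantum  */
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = t; done++; }
        }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        int wait = finish[i] - burst[i];     /* arrival time is 0 for all */
        printf("P%d: waiting time %d ms\n", i + 1, wait);
        total += wait;
    }
    printf("average waiting time = %.2f ms\n", total / n);  /* 5.67 */
    return 0;
}
```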

Thread
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU
utilization; it comprises a thread ID, a program counter, a register set, and a stack. It
shares with other threads belonging to the same process its code section, data section,
and other operating-system resources, such as open files and signals. A traditional
(or heavyweight) process has a single thread of control. If the process has multiple
threads of control, it can do more than one task at a time.
Motivation
Many software packages that run on modern desktop PCs are multithreaded. An
application typically is implemented as a separate process with several threads of
control.
[Figure: single-threaded and multithreaded processes.]
Ex: A web browser might have one thread display images or text while
another thread retrieves data from the network. A word processor may have a
thread for displaying graphics, another thread for reading keystrokes
from the user, and a third thread for performing spelling and
grammar checking in the background.
In certain situations a single application may be required to perform
several similar tasks. For example, a web server accepts client requests for web
pages, images, sound, and so forth. A busy web server may have several (perhaps
hundreds of) clients concurrently accessing it. If the web server ran as a
traditional single-threaded process, it would be able to service only one client at a
time.
One solution is to have the server run as a single process that accepts
requests. When the server receives a request, it creates a separate process to
service that request. In fact, this process-creation method was in common use
before threads became popular. Process creation is very heavyweight, as was
shown in the previous chapter. If the new process will perform the same tasks as
the existing process, why incur all that overhead? It is generally more
efficient for one process that contains multiple threads to serve the same purpose. This
approach would multithread the web-server process. The server would create a
separate thread that would listen for client requests; when a request was made,
rather than creating another process, it would create another thread to service the
request.
Threads also play a vital role in remote procedure call (RPC) systems.
RPCs allow inter-process communication by providing a communication
mechanism similar to ordinary function or procedure calls. Typically, RPC
servers are multithreaded. When a server receives a message, it services the
message using a separate thread. This allows the server to service several
concurrent requests.
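A minimal POSIX-threads (pthread) sketch of creating two threads within one process. The worker function and the shared counter are illustrative; compile with -lpthread. Note how the global counter is visible to both threads because threads share the data section, while each thread's local variables live on its own stack.

```c
#include <stdio.h>
#include <pthread.h>

int counter = 0;                 /* shared: lives in the process's data section */

void *worker(void *arg) {
    int id = *(int *)arg;        /* private: lives on this thread's stack */
    counter++;                   /* unsynchronized here; real code needs a mutex */
    printf("thread %d sees counter = %d\n", id, counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);   /* two threads, one process */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);                    /* wait for both to finish  */
    pthread_join(t2, NULL);
    printf("final counter = %d\n", counter);
    return 0;
}
```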

Benefits
The benefits of multithreaded programming can be broken down into four
major categories:
1. Responsiveness: Multithreading an interactive application may allow a
program to continue running even if part of it is blocked or is performing a
lengthy operation, thereby increasing responsiveness to the user. For instance, a
multithreaded web browser could still allow user interaction
in one thread while an image is being loaded in another thread.
2. Resource sharing: By default, threads share the memory and the resources of
the process to which they belong. The benefit of code sharing is that it allows an
application to have several different threads of activity all within the same
address space.
3. Economy: Allocating memory and resources for process creation is costly.
Alternatively, because threads share resources of the process to which they
belong, it is more economical to create and context switch threads. It can be
difficult to gauge empirically the difference in overhead for creating and
maintaining a process rather than a thread, but in general it is much more time
consuming to create and manage processes than threads. In Solaris 2, creating a
process is about 30 times slower than is creating a thread, and context switching
is about five times slower.
4. Utilization of multiprocessor architectures: The benefits of multithreading
can be greatly increased in a multiprocessor architecture, where each thread may
be running in parallel on a different processor. A single-threaded process can
only run on one CPU, no matter how many are available.

Multithreading on a multi-CPU machine increases concurrency. In a
single-processor architecture, the CPU generally moves between each thread so
quickly as to create an illusion of parallelism, but in reality only one thread is
running at a time.

The OS can support threads at the following two levels:
User-Level Threads
User-level threads are implemented in user-level libraries, rather than via
system calls, so thread switching does not need to call the operating system or
cause an interrupt to the kernel. In fact, the kernel knows nothing about user-level
threads and manages them as if they were single-threaded processes.
Advantages:
 User-level threads do not require modification to the operating system.
 Simple Representation: Each thread is represented simply by a PC, registers, a
stack, and a small control block, all stored in the user process's address space.
 Simple Management: Creating a thread, switching between threads, and
synchronizing threads can all be done without intervention of the kernel.
 Fast and Efficient: Thread switching is not much more expensive than a
procedure call.
Disadvantages:
 There is a lack of coordination between threads and the operating-system kernel.
 User-level threads require non-blocking system calls, i.e., a multithreaded kernel.
Kernel-Level Threads
In this method, the kernel knows about and manages the threads. Instead of a
thread table in each process, the kernel has a thread table that keeps track of all
threads in the system. The operating-system kernel provides system calls to create
and manage threads.
Advantages:
 Because the kernel has full knowledge of all threads, the scheduler may decide to
give more time to a process having a large number of threads than to a process
having a small number of threads.
 Kernel-level threads are especially good for applications that frequently block.
Disadvantages:
 Kernel-level threads are slow and inefficient. For instance, thread
operations are hundreds of times slower than those of user-level threads,
since the kernel must manage and schedule threads as well as processes. It requires
a full thread control block (TCB) for each thread to maintain information about
threads. As a result there is significant overhead and increased kernel
complexity.

Multi-threading Models
Many systems provide support for both user and kernel threads, resulting in
different multithreading models. We look at three common types of threading
implementation.

Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done in user space, so it is efficient, but the entire process will block
if a thread makes a blocking system call. Also, because only one thread can access
the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.

One-to-one Model
The one-to-one model maps each user thread to a kernel thread. It
provides more concurrency than the many-to-one model by allowing another
thread to run when a thread makes a blocking system call; it also allows multiple
threads to run in parallel on multiprocessors. The only drawback to this model is
that creating a user thread requires creating the corresponding kernel thread.
Because the overhead of creating kernel threads can burden the performance of
an application, most implementations of this model restrict the number of threads
supported by the system. Windows NT, Windows 2000, and OS/2 implement the
one-to-one model.

Many-to-Many Model
The many-to-many model multiplexes many user-level threads to a
smaller or equal number of kernel threads. The number of kernel threads may be
specific to either a particular application or a particular machine (an application
may be allocated more kernel threads on a multiprocessor than on a uni-
processor). Whereas the many-to-one model allows the developer to create as
many user threads as she wishes, true concurrency is not gained because the
kernel can schedule only one thread at a time. The one-to-one model allows for
greater concurrency, but the developer has to be careful not to create too many
threads within an application (and in some instances may be limited in the
number of threads she can create). The many-to-many model suffers from neither
of these shortcomings: developers can create as many user threads as necessary,
and the corresponding kernel threads can run in parallel on a multiprocessor.
Process vs. Thread

Process | Thread
1. Processes cannot share the same memory area (address space). | 1. Threads can share memory and files.
2. It takes more time to create a process. | 2. It takes less time to create a thread.
3. It takes more time to complete execution and terminate. | 3. It takes less time to terminate.
4. Execution is very slow. | 4. Execution is very fast.
5. It takes more time to switch between two processes. | 5. It takes less time to switch between two threads.
6. System calls are required to communicate with each other. | 6. System calls are not required.
7. It requires more resources to execute. | 7. It requires fewer resources.
8. Implementing communication between processes is more difficult. | 8. Communication between two threads is very easy to implement because threads share the memory.
