
UNIT-3rd (Process Concept, CPU Scheduling & Deadlock)

Process Concept:
 A process is a program in execution. Components of the process are:
1. Object Program
2. Data
3. Resources
4. States of the process execution
The object program is the code to be executed, and the data is what that code operates on. While
executing, the program may require some resources. The last component is used for tracking the
status of the process execution.
 A process is more than the program code, which is sometimes known as the text section.
It also includes the current activity, as represented by the value of the program counter
and the contents of the processor's registers. A process generally also includes the process
stack, which contains temporary data (such as function parameters, return addresses, and
local variables), and a data section, which contains global variables. A process may also
include a heap, which is memory that is dynamically allocated during process run time.

Figure 1: Process in memory - text, data, heap, and stack sections, from address 0 up to max (the stack grows down from max; the heap grows up from the data section)

A program is a passive entity, such as a file containing a list of instructions stored on disk (often
called an executable file), whereas a process is an active entity, with a program counter
specifying the next instruction to execute and a set of associated resources. A program becomes a
process when an executable file is loaded into memory.

NOTE: Two processes may be associated with the same program but are considered two separate
execution sequences. For instance, several users may invoke many copies of the Web browser program.
Each of these is a separate process; and although the text sections are equivalent, the data, heap, and stack
sections vary.

Operations on processes:
Process Creation: A process may create several new processes, via a system call, during the
course of execution.
Some common events that lead to the creation of a process in different environments are:

1. In a batch environment, a process is created in response to the submission of a job.
2. In an interactive environment, a process is created when a new user attempts to log on.
3. An OS may also create a process on behalf of an application. For example, if a user
requests that a file be printed, the OS can create a process that will manage the printing.
4. One process may cause the creation of another process. For example, a server process (e.g.,
print server, file server) may generate a new process for each request that it handles.

When the OS creates a process at the explicit request of another process, the action is referred to
as process spawning. When one process spawns another, the former is referred to as the
parent process, and the spawned process is referred to as the child process. A child process
may in turn create its own sub-processes, forming a tree of processes.

When a process creates a new process, two possibilities exist in terms of execution:

 The parent continues to execute concurrently with its children.


 The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address space of the new process:

 The child process is a duplicate of the parent process (it has the same program and data as
the parent).
 The child process has a new program loaded into it.

NOTE:

fork() : In UNIX, a new process is created by the fork() system call. It takes no arguments and returns a
process ID. The new process consists of an exact copy of the address space of the original process;
therefore the parent and child processes have separate address spaces. Both processes (the parent
and the child) continue execution at the instruction after the fork(), with one difference:

 fork() returns a zero to the newly created (child) process,
 whereas the (nonzero, positive) process identifier of the child is returned to the parent.
 If fork() returns a negative value, the creation of the child process was unsuccessful.
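To make this concrete, here is a minimal C sketch (illustrative only, assuming a UNIX-like system) showing how the return value of fork() distinguishes the three cases:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>     /* fork(), getpid(), getppid() */

int main(void) {
    pid_t pid = fork();          /* duplicate the calling process */

    if (pid < 0) {               /* negative: creation failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {       /* zero: this is the child */
        printf("child: pid=%d parent=%d\n", getpid(), getppid());
    } else {                     /* positive: parent; pid is the child's id */
        printf("parent: pid=%d child=%d\n", getpid(), pid);
    }
    return 0;
}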

CreateProcess() : In Windows, a new process is created by the CreateProcess() function. However,
whereas fork() has the child process inheriting the address space of its parent, CreateProcess()
requires loading a specified program into the address space of the child process at process creation.
Furthermore, whereas fork() is passed no parameters, CreateProcess() expects no fewer than ten
parameters.

Process Termination:

A process terminates when it finishes executing its final statement and asks the operating system
to delete it by using the exit() system call. All the resources of the process, including physical
and virtual memory, open files, and I/O buffers, are de-allocated by the operating system.

Some common events that lead to the termination of a process in different environments are:

1. A batch job should include a Halt instruction or an explicit OS service call for
termination. In the former case, the Halt instruction will generate an interrupt to alert the
OS that a process has completed.
2. For an interactive application, the action of the user will indicate when the process is
completed. For example, in a time-sharing system, the process for a particular user is to
be terminated when the user logs off or turns off his or her terminal. On a personal
computer or workstation, a user may quit an application (e.g., word processing or
spreadsheet).
3. Additionally, a number of error and fault conditions can lead to the termination of a
process, such as: time limit exceeded, memory unavailable, bounds violation, protection
error, arithmetic error, I/O failure, parent request etc.

In the parent-child scenario, when a child executes the exit() system call, it may return a
status value (typically an integer) to its parent process.
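As a minimal sketch (again assuming a UNIX-like system), the parent can collect this status with the wait() system call:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* wait(), WIFEXITED, WEXITSTATUS */
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                 /* child terminates with status 42 */

    int status;
    wait(&status);                /* parent blocks until the child exits */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}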

A parent may terminate the execution of one of its children for a variety of reasons, such as
these:

 The child has exceeded its usage of some of the resources that it has been allocated.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its
parent terminates.

NOTE: Cascading termination - Some systems, including VMS, do not allow a child to exist if its parent
has terminated. In such systems, if a process terminates (either normally or abnormally), then all its
children must also be terminated. This phenomenon is referred to as cascading termination.

Process States:

As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process.

Two State Process Model: Process may be in one of the two states;

 Running
 Not running

Figure 2: Two State Process Model Transition Diagram

When a new process is created by OS, that process enters into the system in the Not-running
state.

If a process is not in the running state, it can either be blocked or be competing for CPU time.
With a single Not-running state, the dispatcher cannot tell which processes are actually ready to
run, so a separation between a blocked state and a ready state is required.

Five State Process Model: Each process may be in one of the following states;

 New: The process is being created.


 Running: Instructions are being executed.
 Waiting/Blocked: The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
 Ready: The process is waiting to be assigned to a processor.
 Terminated: The process has finished execution.

Figure 3: Five State Process Model Transition Diagram

Five State Model Transitions:

 Null – New: A new process is created due to any of four reasons: a new batch job, an
interactive login, creation by the OS to provide a service, or spawning by an existing process.
 New – Ready: Operating system moves a process from the new state to the Ready state
when it is prepared to take on an additional process.
 Ready – Running: Any process can be moved from ready to running state whenever it is
scheduled. This is the job of the scheduler or dispatcher.
 Running – Exit: The currently running process is terminated by the OS if the process
indicates that it has completed, or if it aborts.
 Running – Ready: The most common case is that the currently running process has used up
its share of execution time (time out). A running process may also be preempted from
running to ready if a higher-priority process becomes ready.
 Running – Blocked: A process is moved to the blocked state if it requested something
(data) for which it may have to wait.
 Blocked – Ready: A process in the blocked state is moved to the ready state when the
event for which it has been waiting occurs.
 Ready – Exit: For example, a parent process may have created one or more child processes
that are in the ready state. If the parent terminates a child during execution, that child
goes directly to the exit state.
 Blocked – Exit: Similarly, a child process waiting in the blocked state for an event may
go directly to exit if the parent itself terminates.

Suspend Process:

 Processor is faster than input/output so all processes could be waiting for input/output.
 Swap these processes to disk to free up more memory.
 Blocked state becomes suspended state when swapped to disk.

Figure 4: Process State Transition Diagram with one suspend state

Modified Suspend Model:

 Two new states are added:
 Blocked/Suspend
 Ready/Suspend

Figure 5: Process State Transition Diagram with two suspend state

Each process may be in one of the following states;

 New: The process is being created.


 Running: Instructions are being executed.
 Waiting/Blocked: The process is in main memory and awaiting an event.
 Ready: The process is in main memory and available for execution.
 Terminated/Exit: The process has finished execution.
 Blocked/Suspend: The process is in secondary memory and awaiting an event.
 Ready/Suspend: The process is in secondary memory but is available for execution as
soon as it is loaded into main memory.

Suspend Process Transitions:

 Blocked - Blocked/Suspend: If there are no ready processes, then at least one blocked
process is swapped out to make room for another process that is not blocked.
 Blocked/Suspend - Ready/Suspend: A process in the Blocked/Suspend state is moved to
the Ready/Suspend state when the event for which it has been waiting occurs.
 Ready/Suspend - Ready: When there are no ready processes in main memory, the OS will
need to bring one in to continue execution. In addition, it might be the case that a process

in the Ready/Suspend state has higher priority than any of the processes in the Ready
state.
 Ready - Ready/Suspend: It may be necessary to suspend a ready process if that is the only
way to free up a sufficiently large block of main memory. Also, the OS may choose to
suspend a lower-priority ready process rather than a higher priority blocked process if it
believes that the blocked process will be ready soon.
 New - Ready/Suspend and New - Ready: When a new process is created, it can either be
added to the Ready queue or the Ready/Suspend queue.
 Blocked/Suspend - Blocked: Consider a situation in which a process terminates, freeing up
some main memory. There is a process in the (Blocked/Suspend) queue with a higher
priority than any of the processes in the (Ready/Suspend) queue and the OS has
reason to believe that the blocking event for that process will occur soon. Under these
circumstances, it would seem reasonable to bring a blocked process into main memory in
preference to a ready process.
 Running - Ready/Suspend: Normally, a running process is moved to the Ready state when
its time allocation expires. If, however, the OS is preempting the process because a
higher-priority process on the Blocked/Suspend queue has just become unblocked, the
OS could move the running process directly to the (Ready/Suspend) queue and free some
main memory.
 Any State - Exit: Typically, a process terminates while it is running, either because it has
completed or because of some fatal fault condition. However, in some operating systems,
a process may be terminated by the process that created it or when the parent process is
itself terminated.

Process Control Block (PCB):


Each process is represented in the operating system by a process control block (PCB) – also
called a task control block.

Figure 6: Process Control Block (PCB) - fields include pointer, process state, process number, program counter, registers, memory limits, and list of open files

It contains the following information associated with a specific process:

 Process state: The state may be new, ready, running, waiting, terminated, and so on.
 Pointer: Each PCB includes a pointer field that points to the next PCB in the ready
queue.
 Program counter: The counter indicates the address of the next instruction to be
executed for this process.
 CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information.

Along with the program counter, this state information must be saved when an interrupt
occurs, to allow the process to be continued correctly afterward.

Figure 7: Diagram showing CPU switch from process to process

 CPU-scheduling information: This information includes a process priority and any
other scheduling parameters.
 Memory-management information: This includes the information of the base and limit
registers, the page tables, or the segment tables, depending on the memory system used
by the operating system.
 Accounting information: This information includes the amount of CPU used, time
limits, job or process numbers, and so on.
 I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may vary from process
to process.
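As an illustration only (a hypothetical, simplified declaration; real kernels use far larger structures, e.g. Linux's task_struct), a PCB could be sketched in C as:

/* Simplified, hypothetical PCB - illustrative only */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    struct pcb      *next;             /* pointer to next PCB in its queue */
    enum proc_state  state;            /* process state */
    int              pid;              /* process number */
    unsigned long    program_counter;  /* address of next instruction */
    unsigned long    registers[16];    /* saved CPU registers */
    unsigned long    base, limit;      /* memory-management information */
    int              priority;         /* CPU-scheduling information */
    long             cpu_time_used;    /* accounting information */
    int              open_files[16];   /* I/O status information */
};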

Process Scheduling:
This mechanism handles the removal of the running process from the CPU and the selection of
another process on the basis of a particular scheduling strategy.

Scheduling Queues:

 Job queue - As processes enter the system, they are put into a job queue, which consists
of all processes in the system.
 Ready queue - The processes that are residing in main memory and are ready and waiting
to execute are kept on a list called the ready queue.
 Device queue - Set of processes waiting for a particular device. Each device has its own
device queue.

A queue is generally stored as a linked list. The queue header contains pointers to the first and
final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the
queue.
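Using the simplified struct pcb sketched earlier (again purely illustrative), such a queue with head and tail pointers could be maintained as follows:

struct pcb_queue {
    struct pcb *head;                /* first PCB in the queue */
    struct pcb *tail;                /* final PCB in the queue */
};

/* FIFO enqueue: link a PCB at the tail */
void enqueue(struct pcb_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;                 /* queue was empty */
    q->tail = p;
}

/* FIFO dequeue: unlink and return the PCB at the head */
struct pcb *dequeue(struct pcb_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;          /* queue became empty */
    }
    return p;
}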

Figure 8: The Ready Queue and Various I/O device queue

A new process is initially put in the ready queue. It waits there until it is selected for
execution, or is dispatched. Once the process is allocated the CPU and is executing, one of
several events could occur:
 The process could issue an I/O request and then be placed in an I/O queue.

 The process could create a new sub-process and wait for the sub-process termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and
be put back in the ready queue.

Figure 9: Queuing-diagram representation of process scheduling

In the first two cases, the process eventually switches from the waiting state to the ready state
and is then put back in the ready queue. A process continues this cycle until it terminates. At the
time of termination it is removed from all queues and has its PCB and resources de-allocated.

Schedulers:

A process migrates among the various scheduling queues throughout its lifetime. The operating
system must select, for scheduling purposes, processes from these queues in some fashion. The
selection process is carried out by the appropriate scheduler.
Schedulers are of three types:
1. Long term scheduler (job schedulers)
2. Short term scheduler (CPU schedulers)
3. Medium term scheduler

Long term scheduler (new to ready):

 It is also called as job scheduler.


 It selects processes from the job pool and loads them into memory for execution. The
long term scheduler executes much less frequently.
 The long-term scheduler controls the degree of multiprogramming (the number of
processes in memory). Because of the longer interval between executions, the long-term
scheduler can afford to take more time to decide which process should be selected for
execution.
A process can be described as either I/O bound or CPU bound. An I/O-bound process is one that
spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in
contrast, generates I/O requests infrequently, using more of its time doing computations. It is
important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound
processes. The system with the best performance will have a combination of CPU-bound and
I/O-bound processes.

NOTE: On some systems, the long-term scheduler may be absent or minimal. For example, time-sharing
systems such as UNIX and Microsoft Windows systems often have no long-term scheduler but simply put
every new process in memory for the short-term scheduler.

Short term scheduler (CPU scheduler):

 The short-term scheduler, or CPU scheduler, selects from among the processes that are
ready to execute and allocates the CPU to one of them.
 The short-term scheduler must select a new process for the CPU frequently.
 Because of the short time between executions, the short-term scheduler must be fast.

Medium term scheduler:

Figure 10: Addition of medium-term scheduling to the queueing diagram

 The key idea behind a medium-term scheduler is that sometimes it can be advantageous
to remove processes from memory and thus reduce the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be continued
where it left off. This scheme is called swapping.
 The process is swapped out, and is later swapped in, by the medium-term scheduler.
 Swapping may be necessary to improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring memory to be freed up.

Fig: Levels of scheduling

Context Switch:

 Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context switch.
 When a context switch occurs, the kernel saves the context of the current process in its
PCB and loads the saved context of the new process scheduled to run.
 Context-switch time is pure overhead, because the system does no useful work while
switching.
 Context-switch times are highly dependent on hardware support.

Comparison between schedulers:

Sr. No. | Long Term | Short Term | Medium Term
1. | It is the job scheduler. | It is the CPU scheduler. | It performs swapping.
2. | Speed is less than the short term scheduler. | Speed is very fast. | Speed is in between both.
3. | It controls the degree of multiprogramming. | Less control over the degree of multiprogramming. | Reduces the degree of multiprogramming.
4. | Absent or minimal in time sharing systems. | Minimal in time sharing systems. | Time sharing systems use the medium term scheduler.
5. | It selects processes from the pool and loads them into memory for execution. | It selects from among the processes that are ready to execute. | A process can be reintroduced into memory and its execution continued.
6. | Process state: New to Ready. | Process state: Ready to Running. | --------

Inter-process Communication:
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the
system. Any process that does not share data with any other process is independent.
A process is cooperating if it can affect or be affected by the other processes executing in the
system. So, any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:
 Information sharing
 Computation speedup
 Modularity
 Convenience

Cooperating processes require an inter-process communication (IPC) mechanism that will allow
them to exchange data and information. The two fundamental models of inter-process
communication are:
1) Shared memory
2) Message passing
Message-passing systems further involve the design issues of:
3) Naming
4) Synchronization
5) Buffering

Shared Memory:

In the shared-memory model, a region of memory that is shared by cooperating processes is
established. A shared-memory region resides in the address space of the process creating the
shared-memory segment. Other processes that wish to communicate using this shared-memory
segment must attach it to their address space. Processes can then exchange information by
reading and writing data to the shared region. The form of the data and the location are
determined by these processes and are not under the operating system's control. The processes
are also responsible for ensuring that they are not writing to the same location simultaneously.

Shared memory is faster than message passing: in shared-memory systems, system calls are
required only to establish shared-memory regions. Once shared memory is established, all
accesses are treated as routine memory accesses, and no assistance from the kernel is required.
Shared memory allows maximum speed and convenience of communication.

Figure 11: Communications models. (a) Message passing. (b) Shared memory.
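As a minimal sketch (assuming POSIX shared memory; the object name "/demo_shm" is arbitrary and error checking is omitted), one process can create and write a shared region like this, and a second process would shm_open() the same name and read it:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>   /* shm_open(), mmap(); link with -lrt on some systems */
#include <unistd.h>

int main(void) {
    const size_t SIZE = 4096;

    /* create (or open) a named shared-memory object and size it */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SIZE);

    /* attach the region to this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* ordinary memory writes are now visible to any process that
       maps the same object - no kernel call per access */
    strcpy(ptr, "hello through shared memory");

    munmap(ptr, SIZE);
    close(fd);
    /* shm_unlink("/demo_shm") would remove the object when done */
    return 0;
}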

Message passing:

In the message-passing model, communication takes place by means of messages exchanged
between the cooperating processes. Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same address space and is
particularly useful in a distributed environment, where the communicating processes may reside
on different computers connected by a network. For example, a chat program.
Message passing is slower than shared memory, as message-passing systems are typically
implemented using system calls and thus require the more time-consuming task of kernel
intervention.
Message passing is useful for exchanging smaller amounts of data and is also easier to
implement than shared memory.

The actual function of message-passing is normally provided in the form of a pair of primitives:
 Send(message)
 Receive(message)
If processes P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them. Here are several methods for
logically implementing a link and the send()/receive() operations:
 Direct or indirect communication
 Synchronous or asynchronous communication
 Automatic or explicit buffering
Note: Message passing is used as a method of communication in micro-kernels.

Addressing (Naming):

Processes that want to communicate must have a way to refer to each other. The various schemes
for specifying processes in send and receive primitives are of two types:
1. Direct communication
2. Indirect communication

Direct Communication: In direct communication, each process that wants to communicate must
explicitly name the recipient or sender of the communication. In this scheme, the send() and
receive() primitives are defined as:
 send(P, message) - Send a message to process P.
 receive(Q, message) - Receive a message from process Q.

A communication link in this scheme has the following properties:


 A link is established automatically between every pair of processes that want to
communicate. The processes need to know only each other's identity to communicate.
 A link is associated with exactly two processes.
 Between each pair of processes, there exists exactly one link.

This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver
process must name the other to communicate.

A variant of this scheme employs asymmetry in addressing. Here, only the sender names the
recipient; the recipient is not required to name the sender. In this scheme, the send() and
receive() primitives are defined as follows:
 send(P, message) - Send a message to process P.
 receive(id, message) - Receive a message from any process; the variable id is set to the
name of the process with which communication has taken place.

Indirect Communication: In indirect communication, the messages are sent to and received from
mailboxes, or ports. Each mailbox has a unique identification. Two processes can communicate
only if the processes have a shared mailbox. The send() and receive() primitives are defined as
follows:
 send(A, message) - Send a message to mailbox A.
 receive(A, message) - Receive a message from mailbox A.

In this scheme, a communication link has the following properties:

 A link is established between a pair of processes only if both members of the pair have a
shared mailbox.
 A link may be associated with more than two processes.
 Between each pair of communicating processes, there may be a number of different links,
with each link corresponding to one mailbox.
A mailbox may be owned either by a process or by the operating system. When a process that
owns a mailbox terminates, the mailbox disappears. If a mailbox is owned by the operating
system, the OS must provide a mechanism that allows a process to: create a new mailbox, send
and receive messages through the mailbox, and delete a mailbox.
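POSIX message queues behave much like OS-owned mailboxes; a minimal sketch (the queue name "/demo_mq" is arbitrary and error checking is omitted):

#include <fcntl.h>
#include <mqueue.h>     /* mq_open(), mq_send(), mq_receive(); link with -lrt */
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* create a new mailbox */
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0666, &attr);

    /* send a message through the mailbox */
    mq_send(mq, "hello", strlen("hello") + 1, 0);

    /* receive a message from the mailbox
       (the buffer must hold at least mq_msgsize bytes) */
    char buf[64];
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");   /* delete the mailbox */
    return 0;
}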

Synchronization:

Communication between processes takes place through calls to send() and receive() primitives.
Message passing may be either blocking or non-blocking, also known as synchronous and
asynchronous.
 Blocking send - The sending process is blocked until the message is received by the
receiving process or by the mailbox.
 Non-blocking send - The sending process sends the message and resumes operation.
 Blocking receive - The receiver blocks until a message is available.
 Non-blocking receive - The receiver retrieves either a valid message or a null.

Different combinations of send() and receive() are possible. When both send() and receive()
are blocking, we have a rendezvous between the sender and the receiver. This combination
allows for tight synchronization between processes.

Buffering:

Whether communication is direct or indirect, messages exchanged by communicating processes
reside in a temporary queue. Basically, such queues can be implemented in three ways:

 Zero capacity - The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
 Bounded capacity - The queue has finite length. If the queue is not full when a new
message is sent, the message is placed in the queue and the sender can continue execution
without waiting. The link's capacity is finite, however. If the link is full, the sender must
block until space is available in the queue.
 Unbounded capacity - The queue's length is potentially infinite; thus, any number of
messages can wait in it. The sender never blocks.
The zero-capacity case is sometimes referred to as a message system with no buffering; the other
cases are referred to as systems with automatic buffering.

Threads:
 A thread is a separate path of execution, because each thread has its own call stack.
 A thread is also known as a lightweight process (LWP).
 A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a
register set, and a stack. It shares with other threads belonging to the same process its
code section, data section, and other operating-system resources, such as open files and
signals. A traditional (or heavyweight) process has a single thread of control. If a process
has multiple threads of control, it can perform more than one task at a time.
A Web browser might have one thread display images or text while another thread retrieves data
from the network. In certain situations, a single application may be required to perform several
similar tasks. For example, a Web server accepts client requests for web pages, images, sound,
and so forth. A busy Web server may have several clients concurrently accessing it. If the Web
server ran as a traditional single-threaded process, it would be able to service only one client at a
time, and a client might have to wait a very long time for its request to be serviced.

Another example, A word processor may have threads for;

 Displaying graphics
 Responding to key stroke from user
 Performing spelling and grammar checking.

Figure 12: Single-threaded and multithreaded processes

If the Web-server process is multithreaded, the server will create a separate thread that listens for
client requests. When a request is made, rather than creating another process, the server will
create a new thread to service the request and resume listening for additional requests.

Figure 13: Multi threaded server architecture

NOTE: Suspending a process involves suspending all threads of the process since all threads share the
same address space. Termination of a process, terminates all threads within the process.

Benefits of threads:

 It takes less time to create a new thread than a process.
 It takes less time to terminate a thread than a process.
 It takes less time to switch between two threads within the same process. Since threads
within the same process share memory and files, they can communicate with each other
without invoking the kernel.
 Threads can exploit multiprocessor architectures.

Figure 14: Process and Threads

User and Kernel Level Threads:
User level threads:
With user-level threads, all of the work of thread management is done by the application and the
kernel is not aware of the existence of threads. The threads library contains code for creating and
destroying threads, for passing messages and data between threads, for scheduling thread
execution, and for saving and restoring thread contexts.

Advantages:

 Thread switching does not require kernel mode privileges.


 User level threads can run on any operating system.
 User level threads are generally fast to create and manage.
Disadvantages:

 A multithreaded application cannot take advantage of multiprocessing. A kernel assigns
one process to only one processor at a time. Therefore, only a single thread within a
process can execute at a time.
Kernel level threads:
With kernel-level threads, thread management is done by the kernel. There is no thread management
code in the application level. The kernel can simultaneously schedule multiple threads from the
same process on multiple processors. And, if one thread in a process is blocked, the kernel can
schedule another thread of the same process.

The kernel performs thread creation, scheduling and management in kernel space.
Advantages:

 Kernel can simultaneously schedule multiple threads from the same process on multiple
processors.
 If one thread in a process is blocked, the kernel can schedule another thread of the same
process.
 Kernel routines themselves can be multithreaded.
Disadvantages:

 Kernel threads are generally slower to create and manage than user threads.
 Transfer of control from one thread to another within the same process requires a mode
switch to the kernel.

Multithreading Models:
A relationship must exist between user threads and kernel threads. There are three common ways
of establishing such a relationship.
1. Many – to – One

2. One – to – One
3. Many – to – Many
Many – to – One:
The many-to-one model maps many user-level threads to one kernel thread. Thread management
is done by the thread library in user space, so it is efficient; but the entire process will block if a
thread makes a blocking system call. Also, because only one thread can access the kernel at a
time, multiple threads are unable to run in parallel on multiprocessors.

Figure 15: Many – to – one model.

One – to – One:
The one-to-one model maps each user thread to a kernel thread. It provides more concurrency
than the many-to-one model by allowing another thread to run when a thread makes a blocking
system call; it also allows multiple threads to run in parallel on multiprocessors. The only
drawback to this model is that creating a user thread requires creating the corresponding kernel
thread. Creating kernel threads can burden the performance of an application.
Example: Linux, along with the family of Windows operating systems, implements the one-to-one
model.

Figure 16: One – to – one model.

Many – to – Many:
The many-to-many model multiplexes many user-level threads to a smaller or equal number of
kernel threads. The number of kernel threads may be specific to either a particular application or
a particular machine.

Figure 17: (a) Many – to – Many model. (b) Two - level model.

One popular variation on the many-to-many model still multiplexes many user-level threads to a
smaller or equal number of kernel threads but also allows a user-level thread to be bound to a
kernel thread. This variation is sometimes referred to as the two-level model.

Thread Libraries:
A thread library provides the programmer with an API for creating and managing threads. There
are two primary ways of implementing a thread library.

 The first approach is to provide a library entirely in user space with no kernel support.
All code and data structures for the library exist in user space. This means that invoking a
function in the library results in a local function call in user space and not a system call.
 The second approach is to implement a kernel-level library supported directly by the
operating system. In this case, code and data structures for the library exist in kernel
space. Invoking a function in the API for the library typically results in a system call to
the kernel.
Three main thread libraries are in use today:
1. Pthreads – Pthreads may be provided as either a user- or kernel-level library.
2. Win32 – The Win32 thread library is a kernel-level library available on Windows
systems.
3. Java – The Java thread API allows threads to be created and managed directly in Java
programs. Java thread API is generally implemented using a thread library available on
the host system.
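For illustration, a minimal Pthreads sketch (compile with -pthread) that creates one thread and waits for it to finish:

#include <pthread.h>
#include <stdio.h>

/* function executed by the new thread */
static void *runner(void *arg) {
    int n = *(int *)arg;
    long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    printf("sum of 1..%d = %ld\n", n, sum);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int n = 10;
    pthread_create(&tid, NULL, runner, &n);  /* spawn the thread */
    pthread_join(tid, NULL);                 /* wait for it to terminate */
    return 0;
}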

Difference between User Level and Kernel Level Thread:

Sr. No. | User level threads | Kernel level threads
1. | User level threads are faster to create and manage. | Kernel level threads are slower to create and manage.
2. | User threads are supported above the kernel and are managed without kernel support. | Kernel threads are supported and managed directly by the operating system.
3. | User level threads can run on any operating system. | Kernel level threads are specific to the operating system.
4. | Multithreaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.

Process VS Threads:
Similarities:

 Like processes, threads share the CPU, and only one thread is active (running) at a time.
 Like processes, threads within a process execute concurrently.
 Like processes, threads can create children.
 Like processes, if one thread is blocked another thread can run.
Differences:

Sr. No. | Process | Thread
1. | A process is called a heavyweight process. | A thread is called a lightweight process.
2. | Processes are independent of each other, i.e. in a multi-process design each process operates independently of the others. | Threads are not independent of one another, i.e. one thread can read, write, or even completely wipe out another thread's stack.
3. | In multi-process implementations each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4. | If one server process is blocked, no other server process can execute until the first process is unblocked. | While one server thread is blocked and waiting, a second thread in the same task can run.
5. | Processes might or might not assist one another, because processes may originate from different users. | Threads are designed to assist one another.

Threading Issues:

1. The fork() and exec() system calls


The semantics of the fork() and exec() system calls change in a multithreaded program.

If one thread in a program calls fork(), does the new process duplicate all threads, or is
the new process single-threaded?

UNIX systems have chosen to have two versions of fork(), one that duplicates all threads
and another that duplicates only the thread that invoked the fork() system call.
The exec() system call typically works in the same way: if a thread invokes the exec()
system call, the program specified in the parameter to exec() will replace the entire
process, including all threads.

2. Cancellation
Thread cancellation is the task of terminating a thread before it has completed. For
example, when a user presses a button on a Web browser that stops a Web page from
loading any further. Often, a Web page is loaded using several threads-each image is
loaded in a separate thread. When a user presses the stop button on the browser, all
threads loading the page are canceled.
A thread that is to be canceled is often referred to as the target thread.
Cancellation of a target thread may occur in two different scenarios:
 Asynchronous cancellation - One thread immediately terminates the target
thread.
In asynchronous cancellation, the difficulty with cancellation occurs in situations
where resources have been allocated to a canceled thread or a thread is canceled
while in the midst of updating data it is sharing with other threads.
 Deferred cancellation - The target thread periodically checks whether it should
terminate, allowing it an opportunity to terminate itself in an orderly fashion.
With deferred cancellation, cancellation occurs only after the target thread has
checked a flag to determine whether or not it should be canceled.

3. Signal Handling
A signal is used to notify a process that a particular event has occurred. A signal may be
handled by one of two possible handlers:
A default signal handler
A user-defined signal handler
Every signal has a default signal handler that is run by the kernel. Signals are handled in
different ways. Some signals (such as changing the size of a window) are simply ignored;
others (such as an illegal memory access) are handled by terminating the program.
Most multithreaded versions of UNIX allow a thread to specify which signals it will
accept and which it will block.
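For instance (a minimal Pthreads sketch), a thread can block SIGINT for itself with pthread_sigmask(), leaving that signal to be delivered to some other thread that has not blocked it:

#include <pthread.h>
#include <signal.h>

static void *worker(void *arg) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    pthread_sigmask(SIG_BLOCK, &set, NULL);  /* this thread will not receive SIGINT */
    /* ... do work ... */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);
    return 0;
}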

4. Thread Pool
In a multithreaded web server, whenever the server receives a request, it creates a
separate thread to service the request. However, this degrades the performance of the
system: time is required to create each thread before it can service its request, and the
thread is discarded once it has completed its work. Unlimited threads could also exhaust
system resources, such as CPU time or memory.
The solution to this problem is to use a thread pool.
The general idea behind a thread pool is to create a number of threads at process startup
and place them into a pool, where they sit and wait for work. When a server receives a
request, it awakens a thread from this pool (if one is available) and passes it the request
for service. Once the thread completes its service, it returns to the pool and awaits more
work. If the pool contains no available thread, the server waits until one becomes free.
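The following is a bare-bones sketch of the idea using Pthreads (a hypothetical design, not a production pool): a fixed set of worker threads waits on a shared request queue protected by a mutex and condition variables.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_WORKERS 4
#define QUEUE_SIZE  16

static int queue[QUEUE_SIZE];          /* pending request ids */
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;

/* each worker sleeps until a request arrives, services it, and loops */
static void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int req = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("worker %ld servicing request %d\n", (long)arg, req);
    }
    return NULL;
}

/* the server thread hands a request to the pool */
static void submit(int req) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SIZE)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = req;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

int main(void) {
    pthread_t tids[NUM_WORKERS];
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (int r = 1; r <= 10; r++)
        submit(r);
    sleep(1);   /* crude: let the workers drain the queue before exiting */
    return 0;
}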

5. Thread-Specific Data
In some circumstances, each thread needs its own copy of certain data. We will call such
data thread-specific data. For example, in a transaction-processing system, we service
each transaction in a separate thread and, each transaction might be assigned a unique
identifier. To associate each thread with its unique identifier, we could use thread-specific
data.
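A minimal Pthreads sketch of this idea, in which each transaction thread stores its own (hypothetical) transaction identifier under one shared key:

#include <pthread.h>
#include <stdio.h>

static pthread_key_t txn_key;    /* one key; a distinct value per thread */

static void *transaction(void *arg) {
    pthread_setspecific(txn_key, arg);   /* store this thread's own id */
    int id = *(int *)pthread_getspecific(txn_key);
    printf("servicing transaction %d\n", id);
    return NULL;
}

int main(void) {
    pthread_key_create(&txn_key, NULL);  /* NULL: no destructor needed */
    int ids[2] = { 101, 102 };
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, transaction, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    pthread_key_delete(txn_key);
    return 0;
}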
