Process

This document discusses processes and process management in an operating system. It defines what a process is, describes process states and scheduling queues, explains process control blocks and the long-term, short-term, and medium-term schedulers, and discusses CPU scheduling algorithms such as FCFS, SJF, priority, and round-robin scheduling, before concluding with threads and multithreading models.

Process:

A process, or task, is an instance of a program in execution. A process must execute its program in a sequential manner: at any time, at most one instruction is executed on its behalf. The process includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. It also includes the process stack, which contains temporary data (such as method parameters, return addresses, and local variables), and a data section, which contains global variables.
Process state:
As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
• New: The process is being created.
• Ready: The process is waiting to be assigned to a processor.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur.
• Terminated: The process has finished execution.
Many processes may be in the ready and waiting states at the same time, but only one
process can be running on any processor at any instant.

Differences between Process and Program

A program is a passive entity: a set of instructions stored on disk (an executable file). A process is an active entity: a program in execution, with a program counter specifying the next instruction and a set of associated resources. Several processes may be associated with the same program.

Process Control Block:
Each process is represented in the operating system by a Process Control Block (PCB), also called a Task Control Block. A PCB contains many pieces of information associated with a specific process, including the following:
• Process state: The state may be new, ready, running, waiting, or terminated.
• Program counter: Indicates the address of the next instruction to be executed for this process.
• CPU registers: The registers vary in number and type depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. This state must be saved when an interrupt occurs so that the process can be continued correctly afterward.
• CPU scheduling information: Includes the process priority, pointers to scheduling queues, and any other scheduling parameters.
• Memory management information: May include the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
• Accounting information: Includes the amount of CPU and real time used, time limits, account numbers, and job or process numbers.
• I/O status information: Includes the list of I/O devices allocated to the process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from process to process.
Process scheduling:
Scheduling is a fundamental function of the OS. When a computer is multiprogrammed, it has multiple processes competing for the CPU at the same time. If only one CPU is available, a choice has to be made regarding which process to execute next. This decision-making process is known as scheduling, the part of the OS that makes the choice is called the scheduler, and the algorithm it uses in making this choice is called the scheduling algorithm.
Scheduling Objectives
 Maximize throughput.
 Maximize number of users receiving acceptable response times.
 Be predictable.
 Balance resource use.
 Avoid indefinite postponement.
 Enforce Priorities.
 Give preference to processes holding key resources.
SCHEDULING QUEUES:
Just as people wait in rooms, processes wait in queues. There are three types:
1. Job queue: When processes enter the system, they are put into a job queue, which consists of all processes in the system. Processes in the job queue reside on mass storage and await the allocation of main memory.
2. Ready queue: A process that is present in main memory and is ready to be allocated the CPU for execution is kept in the ready queue.
3. Device queue: A process that is in the waiting state, waiting for an I/O event to complete, is said to be in a device queue. The set of processes waiting for a particular I/O device is called that device's queue, and each device has its own device queue.
The ready queue is generally stored as a linked list of PCBs: each PCB includes a pointer field that points to the next PCB in the ready queue. A new process is initially put in the ready queue, where it waits until it is selected for execution and is given the CPU.
SCHEDULERS:
A process migrates between the various scheduling queues throughout its lifetime. The OS must select processes from these queues in some fashion; this selection is carried out by the appropriate scheduler. In a batch system, more processes are submitted than can be executed immediately, so these processes are spooled to a mass-storage device such as a disk, where they are kept for later execution.


Types of schedulers:
There are three types of schedulers mainly used:
1. Long-term scheduler:
The long-term scheduler selects processes from the disk and loads them into memory
for execution. It controls the degree of multiprogramming, i.e. the number of
processes in memory. It executes less frequently than the other schedulers: if the
degree of multiprogramming is stable, then the average rate of process creation
equals the average rate at which processes leave the system, so the long-term
scheduler needs to be invoked only when a process leaves the system. Because of
the longer intervals between executions, it can afford to take more time deciding
which process should be selected for execution. Most processes are either I/O
bound or CPU bound. An I/O-bound process (such as an interactive program) is one
that spends more of its time doing I/O than computation. A CPU-bound process
(such as a complex sorting program) is one that spends more of its time doing
computation than I/O. It is important that the long-term scheduler select a good
mix of I/O-bound and CPU-bound processes.

2. Short-term scheduler:

The short-term scheduler selects among the processes that are ready to execute
and allocates the CPU to one of them. The primary distinction between these
two schedulers is the frequency of their execution. The short-term scheduler
must select a new process for the CPU quite frequently, executing at least
once every 100 ms. Because of the short duration of time between executions,
it must be very fast.

3. Medium-term scheduler:

Some operating systems introduce an additional, intermediate level of
scheduling known as the medium-term scheduler. The main idea behind this
scheduler is that it is sometimes advantageous to remove processes from
memory and thus reduce the degree of multiprogramming. At some later time,
the process can be reintroduced into memory and its execution continued
from where it left off. This is called swapping: the process is swapped out
and later swapped in by the medium-term scheduler. Swapping may be necessary
to improve the process mix, or because a change in memory requirements has
overcommitted the available memory, requiring some memory to be freed up.
SCHEDULING CRITERIA:
1. Throughput: how many jobs are completed by the CPU within a time period.
2. Turnaround time: the time interval between the submission of a process and the time of its completion.
3. Waiting time: the time a process spends waiting for the CPU to be allocated.
4. Response time: the time between submission and the first response.
5. CPU utilization: the CPU is a costly device and must be kept as busy as possible. E.g., 90% CPU utilization means the CPU is busy for 90 time units and idle for 10.
6. Context switch: assume main memory contains more than one process. If the CPU is executing a process and its time expires, or a higher-priority process enters main memory, the scheduler saves information about the current process in its PCB and switches to executing another process. This movement of the CPU by the scheduler from one process to another is known as a context switch.
7. Non-preemptive scheduling: once the CPU is assigned to a process, it is not released until the completion of that process; the CPU is assigned to another process only after the previous process has finished.
8. Preemptive scheduling: here the CPU can be released from a process even in the middle of its execution. For example, if a process p2 arrives while p1 is running, the OS compares the priorities of p1 and p2: if p1 > p2, the CPU continues executing p1; if p1 < p2, the CPU preempts p1 and is assigned to p2.
9. Dispatcher: the main job of the dispatcher is switching the CPU from one process to another. The dispatcher connects the CPU to the process selected by the short-term scheduler.
10. Dispatch latency: the time the dispatcher takes to stop one process and start another is known as dispatch latency. As dispatch latency increases, the effective degree of multiprogramming decreases.
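To make the relationships between these criteria concrete, here is a small sketch; all timing values (in ms) are hypothetical, not from the text:

```python
# Illustrative relationships between the criteria above for one process;
# all timing values (in ms) are hypothetical.
arrival, first_run, completion, burst = 0, 2, 30, 24

turnaround = completion - arrival   # criterion 2: submission to completion
waiting = turnaround - burst        # criterion 3: time spent waiting for the CPU
response = first_run - arrival      # criterion 4: submission to first response

print(turnaround, waiting, response)  # 30 6 2
```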
CPU Scheduling Algorithms:
CPU scheduling deals with the problem of deciding which of the processes in the
ready queue is to be allocated the CPU first. Four types of CPU scheduling are
covered here.
1. First Come First Served (FCFS) scheduling:
The process that requests the CPU first holds the CPU first. When a process
requests the CPU, it is loaded into the ready queue, and the CPU is connected to
the process at the head of the queue. Consider a set of processes that arrive at
time 0, each with a given CPU burst time in milliseconds (the burst time is the
CPU time required to execute that job).
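As a sketch of the bookkeeping, the following computes average waiting and turnaround times under FCFS for a hypothetical workload; the burst times are illustrative, and all jobs are assumed to arrive at time 0:

```python
# A minimal FCFS sketch, assuming all processes arrive at time 0 and run
# in submission order. The burst times below are illustrative, not from
# the text.
def fcfs_averages(bursts):
    """Return (average waiting time, average turnaround time) in ms."""
    waiting = 0
    waits, turnarounds = [], []
    for burst in bursts:
        waits.append(waiting)                 # time spent before first run
        turnarounds.append(waiting + burst)   # completion - arrival (= 0)
        waiting += burst                      # later jobs also wait this burst
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

print(fcfs_averages([24, 3, 3]))  # (17.0, 27.0)
```

Note how a long job at the head of the queue (24 ms here) inflates the wait of every job behind it; this is the convoy effect FCFS is criticized for below.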

Characteristics of FCFS method


 FCFS is a non-preemptive scheduling algorithm.
 Jobs are always executed on a first-come, first-serve basis.
 It is easy to implement and use.
 This method is poor in performance, and the general wait time is quite high.

Advantages of FCFS
Here, are pros/benefits of using FCFS scheduling algorithm:
 The simplest form of a CPU scheduling algorithm
 Easy to program
 First come first served

Disadvantages of FCFS
Here, are cons/ drawbacks of using FCFS scheduling algorithm:
 It is a Non-Preemptive CPU scheduling algorithm, so after the process has
been allocated to the CPU, it will never release the CPU until it finishes
executing.
 The Average Waiting Time is high.
 Short processes that are at the back of the queue have to wait for the long
process at the front to finish.
 Not an ideal technique for time-sharing systems.
 Because of its simplicity, FCFS is not very efficient.

Shortest Job First Scheduling


Shortest Job First (SJF) is an algorithm in which the process having the smallest
execution time is chosen for the next execution. This scheduling method can be
preemptive or non-preemptive, and it significantly reduces the average waiting
time for other processes awaiting execution.
There are basically two types of SJF methods:
 Non-Preemptive SJF
 Preemptive SJF

Characteristics of SJF Scheduling


 With each job is associated the unit of time it requires to complete.
 This method is helpful for batch-type processing, where waiting for jobs to
complete is not critical.
 It can improve process throughput by making sure that shorter jobs are
executed first, and hence tend to have shorter turnaround times.

Non-Preemptive SJF
In non-preemptive scheduling, once a CPU cycle is allocated to a process, the
process holds the CPU until it reaches a waiting state or terminates.

Preemptive SJF
In preemptive SJF scheduling, jobs are put into the ready queue as they arrive,
and the process with the shortest burst time begins execution. If a process with
an even shorter burst time arrives, the current process is preempted, and the
shorter job is allocated the CPU.
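The non-preemptive variant can be sketched as ordinary FCFS accounting applied to jobs sorted by burst time; the burst values are illustrative, and all jobs are assumed to arrive at time 0 with burst times known in advance (SJF's key assumption):

```python
# A minimal non-preemptive SJF sketch, assuming all jobs arrive at time 0
# and that burst times are known in advance. The burst values are
# illustrative, not from the text.
def sjf_averages(bursts):
    """Return (average waiting time, average turnaround time) in ms."""
    waiting, waits, turnarounds = 0, [], []
    for burst in sorted(bursts):              # always run the shortest job next
        waits.append(waiting)
        turnarounds.append(waiting + burst)   # completion - arrival (= 0)
        waiting += burst
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

print(sjf_averages([6, 8, 7, 3]))  # (7.0, 13.0)
```

Running these four jobs in arrival order (FCFS) would give an average waiting time of 10.25 ms; sorting by burst time cuts it to 7.0 ms here.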

Advantages of SJF
Here are the benefits/pros of using SJF method:
 SJF is frequently used for long term scheduling.
 It reduces the average waiting time over FIFO (First in First Out) algorithm.
 SJF method gives the lowest average waiting time for a specific set of
processes.
 It is appropriate for the jobs running in batch, where run times are known in
advance.
 For the batch system of long-term scheduling, a burst time estimate can be
obtained from the job description.
 For Short-Term Scheduling, we need to predict the value of the next burst
time.
 Probably optimal with regard to average turnaround time.

Disadvantages/Cons of SJF
Here are some drawbacks/cons of the SJF algorithm:
 Job completion time must be known in advance, but it is hard to predict.
 SJF cannot readily be implemented for short-term CPU scheduling, because
there is no exact way to know the length of the upcoming CPU burst; it can
only be estimated.
 Long jobs may suffer very long turnaround times or starvation if short jobs
keep arriving.
 Elapsed time must be recorded for burst estimates, which adds overhead on
the processor.

Priority Scheduling
Priority Scheduling is a method of scheduling processes that is based on priority.
In this algorithm, the scheduler selects the tasks to work as per the priority.
The processes with higher priority should be carried out first, whereas jobs with
equal priorities are carried out on a round-robin or FCFS basis. Priority depends
upon memory requirements, time requirements, etc.
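A minimal sketch of the selection rule, assuming non-preemptive scheduling, simultaneous arrival, and the convention that a lower number means a higher priority; the process names, bursts, and priorities are illustrative:

```python
# A minimal non-preemptive priority-scheduling sketch. Assumes all
# processes arrive at time 0 and that a lower number means a higher
# priority. Names, bursts, and priorities below are illustrative.
def priority_order(procs):
    """procs: list of (name, burst_ms, priority); returns execution order."""
    return [name for name, _, _ in sorted(procs, key=lambda p: p[2])]

jobs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 2)]
print(priority_order(jobs))  # ['P2', 'P4', 'P1', 'P3']
```

Python's sort is stable, so processes with equal priorities keep their arrival order, giving the FCFS tie-breaking described above.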

Types of Priority Scheduling


Priority scheduling is divided into two main types:
Preemptive Scheduling
In preemptive priority scheduling, tasks are assigned priorities, and it is
sometimes necessary to run a higher-priority task before a lower-priority task
even while the lower-priority task is still running. The lower-priority task is
suspended for some time and resumes when the higher-priority task finishes its
execution.
Non-Preemptive Scheduling
In this method, once the CPU has been allocated to a specific process, that
process keeps the CPU and releases it only by switching context or terminating.
This is the only method usable on all hardware platforms, because it does not
need special hardware (for example, a timer) the way preemptive scheduling does.

Characteristics of Priority Scheduling


 A CPU scheduling algorithm that schedules processes based on priority.
 It is used in operating systems for performing batch processes.
 If two jobs with the same priority are READY, they are handled on a first-
come, first-served basis.
 In priority scheduling, a number is assigned to each process to indicate its
priority level.
 By the convention used here, the lower the number, the higher the priority.
 In the preemptive variant, if a newer process arrives that has a higher
priority than the currently running process, the currently running process is
preempted.

Advantages of priority scheduling


Here are the benefits/pros of the priority scheduling method:
 Easy-to-use scheduling method.
 Processes are executed on the basis of priority, so a high-priority process
does not need to wait long, which saves time.
 This method provides a good mechanism whereby the relative importance of
each process may be precisely defined.
 Suitable for applications with fluctuating time and resource requirements.

Disadvantages of priority scheduling


Here are the cons/drawbacks of priority scheduling:
 If the system eventually crashes, all low-priority processes that were still
waiting get lost.
 If high-priority processes take lots of CPU time, lower-priority processes
may starve and be postponed for an indefinite time.
 A process is blocked when it is ready to run but must wait for the CPU
because some other process is currently running.
 If new higher-priority processes keep arriving in the ready queue, a process
in the waiting state may need to wait for a very long time.

Round-Robin Scheduling
The name of this algorithm comes from the round-robin principle, where each
person gets an equal share of something in turn. It is one of the oldest and
simplest scheduling algorithms and is widely used for multitasking.
In round-robin scheduling, each ready task runs in turn, in a cyclic queue, for
a limited time slice. This algorithm also offers starvation-free execution of
processes.
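The cyclic queue and time slice described above can be sketched as follows; the process names, burst times, and the 4 ms quantum are illustrative, and all processes are assumed to arrive at time 0:

```python
from collections import deque

# A minimal round-robin sketch. Each process runs for at most one time
# quantum, then rejoins the tail of the ready queue if work remains.
# Process names, burst times, and the quantum are illustrative.
def round_robin(bursts, quantum):
    """bursts: {name: burst_ms}; returns {name: completion_time_ms}."""
    queue = deque(bursts.items())
    clock, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of queue
        else:
            completion[name] = clock               # finished within its slice
    return completion

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# {'P2': 7, 'P3': 10, 'P1': 30}
```

The short jobs P2 and P3 finish early instead of waiting behind P1's 24 ms burst, which is exactly what distinguishes round robin from FCFS on the same workload.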

Characteristics of Round-Robin Scheduling


Here are the important characteristics of round-robin scheduling:
 Round robin is a preemptive algorithm.
 The CPU is shifted to the next process after a fixed interval of time, called
the time quantum (or time slice).
 A process that is preempted is added to the end of the queue.
 Round robin is a hybrid, clock-driven model.
 The time slice should be small; its value differs from OS to OS.
 Because every task gets the CPU within a bounded time, it suits time-sharing
and soft real-time use.
 Round robin is one of the oldest, fairest, and easiest algorithms, and a
widely used scheduling method in traditional operating systems.

Advantage of Round-robin Scheduling


Here are the pros/benefits of the round-robin scheduling method:
 It does not suffer from starvation or the convoy effect.
 All jobs get a fair allocation of CPU; processes are treated without regard
to priority.
 If you know the total number of processes on the run queue, you can bound
the worst-case response time for any process.
 This scheduling method does not depend on burst time, so it is easy to
implement.
 Once a process has executed for its time slice, it is preempted and another
process executes for the next time slice; the OS uses context switching to
save the states of preempted processes.
 It gives good performance in terms of average response time.

Disadvantages of Round-robin Scheduling


Here are the drawbacks/cons of round-robin scheduling:
 Performance depends heavily on the time quantum: a low time quantum
increases context-switching overhead and reduces effective processor output,
yet finding a correct time quantum is quite difficult.
 This method spends more time on context switching than non-preemptive
methods.
 Priorities cannot be set for the processes, so more important tasks get no
special treatment.

Thread
A thread is an execution unit that is part of a process. A process can have
multiple threads, all executing at the same time. A thread is a unit of
execution in concurrent programming; it is lightweight, can be managed
independently by a scheduler, and helps improve application performance through
parallelism. Multiple threads of a process share information such as data, code,
and files. Threads can be implemented in three different ways:
1. Kernel-level threads
2. User-level threads
3. Hybrid threads

Properties of Thread
Here are important properties of threads:
 A single system call can create more than one thread.
 Threads share data and information.
 Threads share the instruction, global, and heap regions of the process;
however, each thread has its own registers and stack.
 Thread management consumes few or no system calls, because communication
between threads can be achieved using shared memory.
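As a small illustration of threads sharing data while keeping their own execution state, here is a sketch using Python's threading module; the worker function and counter variable are illustrative, and the lock supplies the mutual exclusion that shared data requires:

```python
import threading

# Two threads share the process's global data (the counter), while each
# has its own stack and registers. The names here are illustrative; the
# lock guards the shared update so increments are not lost.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:            # shared data needs mutual exclusion
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # wait for both threads to finish
print(counter)  # 20000
```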

MULTITHREADED PROGRAMMING
OVERVIEW
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack. It shares with other threads belonging to
the same process its code section, data section, and other operating-system
resources, such as open files and signals. A traditional (or heavyweight)
process has a single thread of control. If a process has multiple threads of
control, it can perform more than one task at a time. The following figure
illustrates the difference between a traditional single-threaded process and a
multithreaded process.

Motivation
Many software packages that run on modern desktop PCs are multithreaded. An
application typically is implemented as a separate process with several threads
of control. A web browser might have one thread display images or text while
another thread retrieves data from the network, for example. A word processor
may have a thread for displaying graphics, another thread for responding to
keystrokes from the user, and a third thread for performing spelling and grammar
checking in the background.

In certain situations, a single application may be required to perform several
similar tasks. For example, a web server accepts client requests for web pages,
images, sound, and so forth. A busy web server may have several (perhaps
thousands of) clients concurrently accessing it. If the web server ran as a
traditional single-threaded process, it would be able to service only one client
at a time, and the amount of time a client might have to wait for its request to
be serviced could be enormous. One solution is to have the server run as a
single process that accepts requests and, when it receives a request, creates a
separate process to service that request. But process creation is time consuming
and resource intensive. It is generally more efficient to use one process that
contains multiple threads. This approach multithreads the web-server process:
the server creates a separate thread that listens for client requests and, when
a request is made, creates another thread to service the request rather than
creating another process.

Threads also play a vital role in remote procedure call (RPC) systems. RPCs
allow interprocess communication by providing a communication mechanism similar
to ordinary function or procedure calls. Typically, RPC servers are
multithreaded: when a server receives a message, it services the message using a
separate thread, which allows the server to service several concurrent requests.
Finally, many operating-system kernels are now multithreaded; several threads
operate in the kernel, and each thread performs a specific task, such as
managing devices or interrupt handling.

Benefits
The benefits of multithreaded programming can be broken down into four major
categories:
1. Responsiveness. Multithreading an interactive application may allow a program
to continue running even if part of it is blocked or is performing a lengthy
operation, thereby increasing responsiveness to the user. For instance, a
multithreaded web browser could still allow user interaction in one thread while
an image was being loaded in another thread.
2. Resource sharing. By default, threads share the memory and the resources of
the process to which they belong. The benefit of sharing code and data is that
it allows an application to have several different threads of activity within
the same address space.
3. Economy. Allocating memory and resources for process creation is costly.
Because threads share resources of the process to which they belong, it is more
economical to create and context-switch threads. Empirically gauging the
difference in overhead can be difficult, but in general it is much more time
consuming to create and manage processes than threads. In Solaris, for example,
creating a process is about thirty times slower than creating a thread, and
context switching is about five times slower.
4. Scalability. The benefits of multithreading are even greater on a
multiprocessor architecture, where threads may run in parallel on different
processors.
User Level Threads
• In this case, the kernel is not aware of the existence of threads. The thread
library contains code for creating and destroying threads, for passing messages
and data between threads, for scheduling thread execution, and for saving and
restoring thread contexts. The application starts with a single thread.
Advantages
• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application specific.
• User-level threads are fast to create and manage.

Kernel Level Threads


• In this case, thread management is done by the kernel; there is no thread-
management code in the application area. Kernel threads are supported directly
by the operating system. Any application can be programmed to be multithreaded,
and all of the threads within an application are supported within a single
process.
• The kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the kernel is done on a
thread basis: the kernel performs thread creation, scheduling, and management in
kernel space.
Advantages
• The kernel can simultaneously schedule multiple threads from the same process
on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread
of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process
requires a mode switch to the kernel.
Multithreading Models
• Some operating systems provide a combined user-level and kernel-level thread
facility; Solaris is a good example of this combined approach. In a combined
system, multiple threads within the same application can run in parallel on
multiple processors, and a blocking system call need not block the entire
process.
There are three multithreading models:
• Many-to-many relationship
• Many-to-one relationship
• One-to-one relationship
Many-To-One Model
• In the many-to-one model, many user-level threads are mapped onto a single
kernel thread.
• Thread management is handled by the thread library in user space, which is
efficient; however, the entire process blocks if one thread makes a blocking
system call.

One-To-One Model
• The one-to-one model creates a separate kernel thread to handle each user
thread. Most implementations of this model place a limit on how many threads can
be created.
• Linux and Windows (from 95 through XP) implement the one-to-one model for
threads.
Many-To-Many Model
• The many-to-many model multiplexes any number of user threads onto an equal
or smaller number of kernel threads, combining the best features of the
one-to-one and many-to-one models.
• Users can create any number of threads.
• A blocking kernel system call does not block the entire process.
• Processes can be split across multiple processors.

Thread Libraries
 Thread libraries provide programmers with an API for creating and managing
threads.
• Thread libraries may be implemented either in user space or in kernel space.
A user-space library consists of API functions implemented solely within user
space, with no kernel support; a kernel-space library involves system calls and
requires a kernel with thread-library support.
Benefits of Multithreading
• Responsiveness.
• Resource sharing, allowing better utilization of resources.
• Economy: creating and managing threads is cheaper than creating and managing
processes.
• Scalability: a single-threaded process runs on only one CPU, whereas the
threads of a multithreaded process can be distributed over a series of
processors to scale.
• Smooth context switching: context switching refers to the procedure followed
by the CPU to change from one task to another.
