Process
The short-term scheduler selects among the processes that are ready to execute
and allocates the CPU to one of them. The primary distinction between the
short-term and long-term schedulers is the frequency of their execution. The
short-term scheduler must select a new process for the CPU quite frequently,
at least once every 100 ms. Because of the short duration of time between
executions, it must be very fast.
Advantages of FCFS
Here are the benefits of using the FCFS scheduling algorithm:
The simplest form of a CPU scheduling algorithm
Easy to program
First come first served
Disadvantages of FCFS
Here are the drawbacks of using the FCFS scheduling algorithm:
It is a non-preemptive CPU scheduling algorithm: once the CPU has been
allocated to a process, the process does not release it until it finishes
executing.
The Average Waiting Time is high.
Short processes that are at the back of the queue have to wait for the long
process at the front to finish.
Not an ideal technique for time-sharing systems.
Because of its simplicity, FCFS is not very efficient.
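The waiting-time problem above can be illustrated with a minimal FCFS simulation. This is a sketch, not from the original text; the job list and time units are invented:

```python
# Minimal FCFS simulation. Each job is (arrival_time, burst_time);
# jobs are served strictly in arrival order, non-preemptively.

def fcfs_waiting_times(jobs):
    """Return the waiting time of each job under FCFS, in arrival order."""
    time = 0
    waits = []
    for arrival, burst in sorted(jobs, key=lambda j: j[0]):
        start = max(time, arrival)    # CPU may sit idle until the job arrives
        waits.append(start - arrival) # waiting time = start - arrival
        time = start + burst          # job runs to completion
    return waits

# A long job at the front makes the short jobs behind it wait (convoy effect):
jobs = [(0, 24), (1, 3), (2, 3)]
waits = fcfs_waiting_times(jobs)
print(waits, sum(waits) / len(waits))  # [0, 23, 25] 16.0
```

With the long job first, the average waiting time is 16 time units; serving the two short jobs first would cut it sharply, which is exactly the weakness the list above describes.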
Non-Preemptive SJF
In non-preemptive scheduling, once the CPU is allocated to a process, the
process holds it until it reaches a waiting state or terminates.
Preemptive SJF
In preemptive SJF scheduling, jobs are placed in the ready queue as they arrive.
The process with the shortest burst time begins execution. If a process with an
even shorter burst time arrives, the currently running process is preempted,
and the shorter job is allocated the CPU.
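The preemptive variant (often called shortest-remaining-time-first) can be sketched as a small unit-time simulation. The process names and burst times are hypothetical, and the code assumes burst times are known in advance, which a real scheduler can only estimate:

```python
# Preemptive SJF (shortest-remaining-time-first) simulated one time unit
# at a time. jobs maps name -> (arrival_time, burst_time).

def srtf_completion_times(jobs):
    """Return a dict mapping each process name to its completion time."""
    remaining = {n: burst for n, (arrival, burst) in jobs.items()}
    done, t = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:                      # CPU idle until the next arrival
            t = min(jobs[n][0] for n in remaining)
            continue
        n = min(ready, key=lambda x: remaining[x])  # shortest remaining burst
        remaining[n] -= 1                  # run it for one time unit
        t += 1
        if remaining[n] == 0:
            done[n] = t
            del remaining[n]
    return done

print(srtf_completion_times({"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}))
```

Here P1 starts first but is preempted as soon as P2 (and then P3) arrive with shorter remaining bursts, so P3 finishes at time 4, P2 at 7, and P1 only at 14.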
Advantages of SJF
Here are the benefits of using the SJF method:
SJF is frequently used for long-term scheduling.
It reduces the average waiting time compared to the FIFO (First In, First Out) algorithm.
SJF method gives the lowest average waiting time for a specific set of
processes.
It is appropriate for the jobs running in batch, where run times are known in
advance.
For the batch system of long-term scheduling, a burst time estimate can be
obtained from the job description.
For Short-Term Scheduling, we need to predict the value of the next burst
time.
It is provably optimal with regard to average turnaround time.
Disadvantages/Cons of SJF
Here are some drawbacks of the SJF algorithm:
The completion time of a job must be known in advance, but it is hard to
predict.
It is often used in a batch system for long-term scheduling.
SJF cannot be implemented for short-term CPU scheduling, because there is no
precise way to predict the length of the next CPU burst.
Long jobs may suffer very long turnaround times, or even starvation, while
shorter jobs keep arriving ahead of them.
Elapsed time must be recorded, which adds overhead on the processor.
Priority Scheduling
Priority Scheduling is a method of scheduling processes that is based on priority.
In this algorithm, the scheduler selects tasks according to their priority.
Processes with higher priority are carried out first, while jobs with equal
priorities are handled on a round-robin or FCFS basis. Priority can depend on
memory requirements, time requirements, and other factors.
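The selection rule just described (highest priority first, FCFS among equal priorities) can be sketched as a simple sort. The job names and priority values below are invented for illustration; lower numbers mean higher priority:

```python
# Non-preemptive priority scheduling: pick by priority, break ties FCFS.
# Each job is (name, arrival_index, priority); lower priority value = runs first.

def priority_order(jobs):
    """Return the execution order of the jobs."""
    # Sort by priority first; for equal priorities, arrival order decides (FCFS).
    return [name for name, _, _ in sorted(jobs, key=lambda j: (j[2], j[1]))]

jobs = [("backup", 0, 3), ("editor", 1, 1), ("compiler", 2, 1), ("logger", 3, 2)]
print(priority_order(jobs))  # ['editor', 'compiler', 'logger', 'backup']
```

Note that "editor" and "compiler" share priority 1, so their relative order is decided by arrival, exactly the FCFS tie-break described above.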
Round-Robin Scheduling
The name of this algorithm comes from the round-robin principle, in which each
person gets an equal share of something in turn. It is one of the oldest and
simplest scheduling algorithms and is widely used for multitasking.
In round-robin scheduling, each ready task runs in turn, in a cyclic queue,
for a limited time slice. This algorithm also offers starvation-free execution
of processes.
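The cyclic, time-sliced behaviour can be sketched as follows. The burst times and quantum are hypothetical:

```python
# Round-robin sketch with a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: name -> burst time. Return name -> completion time."""
    queue = deque(bursts.items())   # ready queue in arrival order
    t, done = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)    # run for at most one quantum
        t += run
        if left > run:
            queue.append((name, left - run))  # back of the queue
        else:
            done[name] = t          # finished within this slice
    return done

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Every process gets CPU time within one full rotation of the queue, which is why round-robin is starvation-free.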
Thread
A thread is an execution unit that is part of a process. A process can have
multiple threads, all executing at the same time. A thread is a unit of
execution in concurrent programming. It is lightweight, can be managed
independently by a scheduler, and helps improve application performance
through parallelism. Multiple threads of the same process share information
such as data, code, and files. Threads can be implemented in three different
ways:
1. Kernel-level threads
2. User-level threads
3. Hybrid threads
Properties of Thread
Here are important properties of threads:
A single system call can create more than one thread.
Threads share data and information.
Threads share the code, global, and heap regions; however, each thread has
its own registers and stack.
Thread management requires few or no system calls, because communication
between threads can be achieved using shared memory.
MULTITHREADED PROGRAMMING
OVERVIEW
A thread is a basic unit of CPU utilization; it comprises a thread ID, a
program counter, a register set, and a stack. It shares with other threads
belonging to the same process its code section, data section, and other
operating-system resources, such as open files and signals. A traditional (or
heavyweight) process has a single thread of control. If a process has multiple
threads of control, it can perform more than one task at a time. The following
figure illustrates the difference between a traditional single-threaded process
and a multithreaded process.
Motivation
Many software packages that run on modern desktop PCs are multithreaded. An
application typically is implemented as a separate process with several threads
of control. A web browser might have one thread display images or text while
another thread retrieves data from the network, for example. A word processor
may have a thread for displaying graphics, another thread for responding to
keystrokes from the user, and a third thread for performing spelling and
grammar checking in the background.

In certain situations, a single application may be required to perform several
similar tasks. For example, a web server accepts client requests for web pages,
images, sound, and so forth. A busy web server may have several (perhaps
thousands) of clients concurrently accessing it. If the web server ran as a
traditional single-threaded process, it would be able to service only one
client at a time. The amount of time that a client might have to wait for its
request to be serviced could be enormous.

One solution is to have the server run as a single process that accepts
requests. When the server receives a request, it creates a separate process to
service that request. But process creation is time consuming and resource
intensive. It is generally more efficient to use one process that contains
multiple threads. This approach would multithread the web-server process. The
server would create a separate thread that would listen for client requests;
when a request was made, rather than creating another process, the server would
create another thread to service the request.

Threads also play a vital role in remote procedure call (RPC) systems. RPCs
allow interprocess communication by providing a communication mechanism similar
to ordinary function or procedure calls. Typically, RPC servers are
multithreaded. When a server receives a message, it services the message using
a separate thread. This allows the server to service several concurrent
requests.

Finally, many operating system kernels are now multithreaded; several threads
operate in the kernel, and each thread performs a specific task, such as
managing devices or interrupt handling.
Benefits
The benefits of multithreaded programming can be broken down into four major
categories:

1. Responsiveness. Multithreading an interactive application may allow a
program to continue running even if part of it is blocked or is performing a
lengthy operation, thereby increasing responsiveness to the user. For instance,
a multithreaded web browser could still allow user interaction in one thread
while an image was being loaded in another thread.

2. Resource sharing. By default, threads share the memory and the resources of
the process to which they belong. The benefit of sharing code and data is that
it allows an application to have several different threads of activity within
the same address space.

3. Economy. Allocating memory and resources for process creation is costly.
Because threads share resources of the process to which they belong, it is more
economical to create and context-switch threads. Empirically gauging the
difference in overhead can be difficult, but in general it is much more time
consuming to create and manage processes than threads. In Solaris, for example,
creating a process is about thirty times slower than creating a thread, and
context switching is about five times slower.
User Level Threads
• In this case, the thread management kernel is not aware of the existence of
threads. The thread library contains code for creating and destroying threads,
for passing messages and data between threads, for scheduling thread execution,
and for saving and restoring thread contexts. The application starts with a
single thread.
Advantages
• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application-specific in the user-level thread.
• User-level threads are fast to create and manage.
One-To-One Model
• The one-to-one model creates a separate kernel thread to handle each and every
user thread.
Most implementations of this model place a limit on how many threads can be
created.
• Linux and Windows from 95 to XP implement the one-to-one model for threads
Many-To-Many Model
• The many-to-many model multiplexes any number of user threads onto an equal
or smaller number of kernel threads, combining the best features of the
one-to-one and many-to-one models.
• Users can create any number of threads.
• Blocking the kernel system calls does not block the entire process.
• Processes can be split across multiple processors.
Thread Libraries
Thread libraries provide programmers with an API for creating and managing
threads.
• Thread libraries may be implemented either in user space or in kernel space.
A user-space library implements its API functions solely within user space,
with no kernel support. A kernel-space library involves system calls and
requires a kernel with thread-library support.
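As a concrete example of a thread library's API, here is a short sketch using Python's `threading` module (on most platforms this is backed by kernel threads, e.g. pthreads). The page names and the `fetch` function are placeholders invented for this sketch:

```python
# Creating, starting, and joining threads through a thread library's API.
import threading

results = {}

def fetch(page):
    # Stand-in for real work, e.g. retrieving a page over the network.
    results[page] = f"contents of {page}"

pages = ["index", "about", "contact"]
threads = [threading.Thread(target=fetch, args=(p,)) for p in pages]
for t in threads:
    t.start()   # ask the library to create and run a thread
for t in threads:
    t.join()    # wait for each thread to finish
print(sorted(results))  # ['about', 'contact', 'index']
```

The same create/start/join pattern appears in most thread libraries, whether implemented in user space or on top of kernel system calls.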
Benefits of Multithreading
• Responsiveness
• Resource sharing, hence allowing better utilization of resources.
• Economy. Creating and context-switching threads is cheaper than doing the
same for processes.
• Scalability. A single-threaded process runs on only one CPU, whereas the
threads of a multithreaded process can be distributed over several processors.
• Smooth context switching. Context switching refers to the procedure followed
by the CPU to change from one task to another.