OS Unit 1

The document outlines the vision and mission of the Computer Science and Engineering department at TECHNO NJR Institute of Technology, Udaipur, emphasizing the development of competent engineers through outcome-based education and industry interaction. It details the program outcomes and specific outcomes related to operating systems, including knowledge application, problem analysis, and ethical responsibilities. Additionally, it provides a comprehensive syllabus and lecture plan for the Operating System course, covering topics such as memory management, deadlock handling, and various operating system types.


TECHNO NJR INSTITUTE OF TECHNOLOGY, UDAIPUR

Year & Sem – III year & V Sem


Subject – Operating System
Unit – I
Presented by – Er. Pushpendra Singh Chundawat, Professor, CSE

Er.pushpendra singh chundawat 1 1


VISION AND MISSION OF DEPARTMENT
VISION:
To become a renowned centre of excellence in computer science and
engineering and produce competent engineers and professionals with high
ethical values, prepared for lifelong learning.

MISSION:
M1: To impart outcome-based education for emerging technologies in the
field of computer science and engineering.
M2: To provide opportunities for interaction between academia and
industry.
M3: To provide a platform for lifelong learning by accepting changes in
technology.
M4: To develop an aptitude for fulfilling social responsibilities.

PROGRAM OUTCOMES
Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals and Computer
Science and Engineering specialization to the solution of complex Computer Science and Engineering problems.
Problem analysis: Identify, formulate, research literature, and analyse complex Computer Science and Engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering
sciences.
Design/development of solutions: Design solutions for complex Computer Science and Engineering problems and
design system components or processes that meet the specified needs with appropriate consideration for the public
health and safety, and the cultural, societal, and environmental considerations.
Conduct investigations of complex problems: Use research-based knowledge and research methods including design
of Computer Science and Engineering experiments, analysis and interpretation of data, and synthesis of the information
to provide valid conclusions.
Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools
including prediction and modelling to complex Computer Science Engineering activities with an understanding of the
limitations.
The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety,
legal and cultural issues and the consequent responsibilities relevant to the professional Computer Science and
Engineering practice.

Cont..
Environment and sustainability: Understand the impact of the professional Computer Science and Engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the Computer
Science and Engineering practice.
Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in
multidisciplinary settings in Computer Science and Engineering.
Communication: Communicate effectively on complex Computer Science and Engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.
Project management and finance: Demonstrate knowledge and understanding of the Computer Science and
Engineering and management principles and apply these to one’s own work, as a member and leader in a team, to
manage projects and in multidisciplinary environments.
Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-
long learning in the broadest context of technological change in Computer Science and Engineering.

PROGRAM SPECIFIC OUTCOMES
PSO1: Ability to interpret and analyze network-specific and cyber
security issues and automation in real-world environments.
PSO2: Ability to design and develop mobile and web-based
applications under realistic constraints.

CO of Operating System
1. Demonstrate the concepts, structure design of operating system
and analysis of process management.
2. Recognize the concepts, implementation of memory
management policies, design issues of paging and virtual
memory.
3. Understand and design the concepts of deadlock handling and
device management.
4. Analyze the file system structure, implementation process and
acquainted with various types of operating systems.



CO-PO& CO-PSO Mapping
SEM: 5   Subject Code: 5CS4-03   Subject: Operating System

CO    PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12  PSO1  PSO2

CO1    3    2    2    2    2    1    1    1    1    1     1     3     1     2

CO2    3    3    3    3    2    1    1    1    1    2     1     3     2     2

CO3    3    3    3    2    3    1    1    1    1    2     1     3     2     2

CO4    3    3    3    3    2    1    1    1    2    2     2     3     2     2



Syllabus



Lecture Plan
UNIT / MAIN TOPIC / NO. OF LECTURES

1. Introduction: Objective, scope and outcome of the course. (1)
   Introduction and history of operating systems: structure and operations; processes and files; processor management (1)
   Inter-process communication, mutual exclusion, semaphores (1)
   Wait and signal procedures, process scheduling and algorithms (1)
   Critical sections, threads, multithreading (1)

2. Memory management: contiguous memory allocation (1)
   Virtual memory, paging (1)
   Page table structure, demand paging (1)
   Replacement policies, thrashing (1)
   Segmentation, case study (1)

3. Deadlock: shared resources (1)
   Resource allocation and scheduling (2)
   Resource graph models (2)
   Deadlock detection (1)
   Deadlock avoidance (2)
   Deadlock prevention algorithms (2)
   Device management: devices and their characteristics (1)
   Device drivers, device handling (1)
   Disk scheduling algorithms (2)
   Device algorithm policies (1)

4. File management: file concept (1)
   Types and structures, directory structure (2)
   Case studies (2)
   Access methods and matrices (1)
   File security, user authentication (1)

5. UNIX and Linux operating systems as case studies (1)
   Real-time OS introduction (1)
   RTS work procedure (1)
   RTS application areas: a study (1)
   Case studies of mobile OS (1)
   Android OS (1)
   iOS (1)
   Security issues on different real-time OS (1)

Total: 40

Content beyond curricula: multitasking, context switching; buddy system, overlays; fragmentation, compaction; disk management; iOS and Kali Linux.
Introduction of OS

Goal of an Operating System:

• The fundamental goal of a Computer System is to execute user programs


and to make tasks easier.

• Various application programs along with hardware system are used to


perform this work.

• Definition of Operating System:


Software that manages and controls the entire set of resources and
effectively utilizes every part of a computer.
Definition of OS

Operating System Definitions

■ Resource allocator – manages and allocates resources.

■ Control program – controls the execution of user programs and


operations of I/O devices.

■ Kernel – the one program running at all times (all else being application
programs).



Introduction of OS
The figure shows how OS acts as a medium between hardware unit and
application programs.



Introduction of OS
Computer System Components
1. Hardware – provides basic computing resources (CPU, memory, I/O
devices).

2. Operating system – controls and coordinates the use of the hardware


among the various application programs for the various users.

3. Application programs – define the ways in which the system resources


are used to solve the computing problems of the users (compilers,
database systems, video games, business programs).

4. Users (people, machines, other computers).


Need of Operating System:
1. Platform for Application programs

2. Managing Input-Output unit

3. Consistent user interface

4. Multitasking



Functions of Operating System:
1. Memory Management 6. Control over System Performance

2. Processor Management 7. Job accounting

3. Device Management 8. Error detecting aids

4. File Management 9.Coordination between other s/w


and users
5. Security



Operating Systems Structures
Just like any other software, the operating system code can be structured in
different ways. The following are some of the commonly used structures.

Simple/Monolithic Structure
In this case, the operating system code has no structure.
It is written for functionality and efficiency (in terms of time and space).
DOS and UNIX are examples of such systems.

Layered Approach
The modularization of a system can be done in many ways.
In the layered approach, the operating system is broken up into a number of layers
or levels, each built on top of the lower layer.
The bottom layer is the hardware; the highest layer is the user interface.
A typical OS layer consists of data structures and a set of routines that can be
invoked by higher-level layers.
Virtual Machines
• The computer system is made up of layers.
• The hardware is the lowest level in all such systems.
• The kernel running at the next level uses the hardware instructions to
create a set of system calls for use by outer layers.
• The system programs above the kernel are therefore able to use either
system calls or hardware instructions, and in some ways these programs
do not differentiate between the two.
• System programs, in turn, treat the hardware and the system calls as
though they were both at the same level.
• In some systems, the application programs can call the system programs.
The application programs view everything under them in the hierarchy as
though the latter were part of the machine itself.
• This layered approach is taken to its logical conclusion in the concept of a
virtual machine (VM).
• The VM operating system for IBM systems is the best example of VM
concept.
Virtual Machines Cont..
There are two primary advantages to using virtual machines:
First, by completely protecting system resources, the virtual machine
provides a robust level of security.
Second, the virtual machine allows system development to be done
without disrupting normal system operation.

Although the virtual machine concept is useful, it is difficult to implement.

The Java Virtual Machine (JVM) loads, verifies and executes programs
that have been translated into Java bytecode. VMware can be run
on a Windows platform to create a virtual machine on which you can
install an operating system of your choice, such as Linux. Virtual PC
software works in a similar fashion.
Operating System Types
Single-user systems
A computer system that allows only one user to use the computer at a
given time is known as a single-user system.

The goals of such systems are maximizing user convenience and


responsiveness, instead of maximizing the utilization of the CPU and
peripheral devices.

Single-user systems use I/O devices such as keyboards, mice, display


screens, scanners, and small printers. They can adopt technology
developed for larger operating systems.

They may run different types of operating systems, including DOS,


Windows, and MacOS. Linux and UNIX operating systems can also be run
in single-user mode.
Batch Systems
Early computers were large machines run from a console with card readers
and tape drives as input devices and line printers, tape drives, and card
punches as output devices.

The user did not interact directly with the system; instead, the user
prepared a job, (which consisted of the program, data, and some control
information about the nature of the job in the form of control cards) and
submitted this to the computer operator.

The job was in the form of punch cards, and at some later time, the output
was generated by the system. The output consisted of the result of the
program, as well as a dump of the final memory and register contents for
debugging.



Batch Systems Cont..
To speed up processing, operators batched together jobs with similar
needs and ran them through the computer as a group. For example, all
FORTRAN programs were compiled one after the other.

The major task of such an operating system was to transfer control


automatically from one job to the next.

Such systems in which the user does not get to interact with his jobs and
jobs with similar needs are executed in a “batch”, one after the other, are
known as batch systems.

Digital Equipment Corporation’s VMS is an example of a batch operating


system.



Multi-programmed Systems
Such systems organize jobs so that CPU always has one to execute.

In this way, CPU utilization is increased.

The operating system picks and begins to execute one of the available jobs
in memory.

Eventually, the job may have to wait for some task, such as an I/O operation,
to complete.

In a non-multiprogrammed system the CPU would sit idle; in a

multiprogrammed system, the operating system simply switches to, and
executes, another job.

A computer running Excel and a Firefox browser simultaneously is an example.


Time-sharing systems
These are multi-user and multi-process systems.

Multi-user means the system allows multiple users to use it simultaneously.

In this system, a user can run one or more processes at the same time.

Examples of time-sharing systems are UNIX, Linux, Windows server


editions.



Real-time systems
Real time systems are used when strict time requirements are placed on
the operation of a processor or the flow of data.

These are used to control a device in a dedicated application.

For example, medical imaging system and scientific experiments.



Examples of Operating System:
There are many types of operating system. Some most popular examples
of operating system are:

Unix Operating System

Unix was initially written in assembly language. Later it was rewritten in C
and developed into a large, complex family of inter-related operating
systems. The major categories include BSD and Linux.

“UNIX” is a trademark of The Open Group which licenses it for use with
any operating system that has been shown to conform to their definitions.
Examples of Operating System Cont..
macOS
macOS is developed by Apple Inc. and is available on all Macintosh
computers.
It was formerly called “Mac OS X” and later “OS X”.
macOS descends from NeXTSTEP, developed in the 1980s by NeXT, a
company Apple purchased in 1997.

Linux
Linux is a Unix-like operating system and was developed without any Unix
code. Linux has an open licence model, and its code is available for study
and modification. It has superseded Unix on many platforms. Linux is
commonly used in smartphones and smartwatches.
Examples of Operating System Cont..
Microsoft Windows
Microsoft Windows is the most popular and most widely used operating system.
It was designed and developed by Microsoft Corporation.
The current version is Windows 10.
Microsoft Windows was first released in 1985.
In 1995, Windows 95 was released, which used MS-DOS only as a
bootstrap.

Other operating systems


Various operating systems, such as OS/2 and BeOS, were developed over
time but are no longer used now.



Program vs Process
A process is an instance of a program in execution.

Batch systems work in terms of "jobs".


Many modern process concepts are still expressed in terms of jobs,
( e.g. job scheduling ), and the two terms are often used interchangeably.

A process is a program in execution. For example, when we write a program in C or


C++ and compile it, the compiler creates binary code. The original code and binary
code are both programs. When we actually run the binary code, it becomes a
process.

A process is an ‘active’ entity, as opposed to a program, which is considered to be a


‘passive’ entity. A single program can create many processes when run multiple
times; for example, when we open a .exe or binary file multiple times, multiple
instances begin (multiple processes are created).
What does a process look like in memory?
Text Section: contains the compiled program code. The current activity is
represented by the value of the Program Counter.

Data Section: Contains the global and static variables.

Heap Section: Dynamically allocated memory to process


during its run time.

Stack: The stack contains temporary data, such as function parameters,
return addresses, and local variables.

Cont..
Note that the stack and the heap start at opposite ends of the process's
free space and grow towards each other.

If they should ever meet, then either a stack overflow error will occur, or
else a call to new or malloc will fail due to insufficient memory available.

When processes are swapped out of memory and later restored,


additional information must also be stored and restored.

Key among them are the program counter and the values of all
program registers.



Attributes or Characteristics of a Process
Process Id: A unique identifier assigned by the operating system

Process State: Can be ready, running, etc.

CPU registers: Like the Program Counter (CPU registers must be saved
and restored when a process is swapped in and out of CPU)

Accounting information: user and kernel CPU time consumed, account
numbers, limits, etc.

I/O status information: For example, devices allocated to the process,
open files, etc.



Cont
CPU scheduling information: For example, priority (different processes
may have different priorities; for example, a process with a short burst time
may be given priority under shortest-job-first scheduling)

Memory-management information: e.g., page tables or segment tables.

All of these attributes for a process are stored in its PCB (Process Control Block).



Process State
Processes may be in one of 5 states
New - The process is in the stage of being
created.
Ready - The process has all the resources
available that it needs to run, but the CPU is not
currently working on this process's instructions.
Running - The CPU is working on this process's
instructions.
Waiting - The process cannot run at the moment,
because it is waiting for some resource to
become available or for some event to occur. For
example the process may be waiting for keyboard
input, disk access request, inter-process
messages, a timer to go off, or a child process to
finish.
Terminated - The process has completed.
Threads
What is a Thread?
A thread is a path of execution within a process. A process can contain multiple
threads.

Process vs Thread?
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.

Threads are not independent of one another the way processes are; as a result,
threads share their code section, data section, and OS resources (like open files
and signals) with other threads.

But, like a process, a thread has its own program counter (PC), register set, and
stack space.
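A minimal sketch of this sharing, using Python’s `threading` module: the four threads each have their own stack and program counter, but all update one shared variable (the counter and thread count here are illustrative):

```python
import threading

counter = 0                      # shared data: threads share the data section
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):           # each thread has its own stack and PC,
        with lock:               # but all write the *shared* counter
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 4000: every thread updated one shared variable
```

If these were four separate processes instead of threads, each would have its own copy of `counter` in a separate memory space.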



Process Scheduling
The two main objectives of the process scheduling system are to keep the
CPU busy at all times and to deliver "acceptable" response times for all
programs, particularly for interactive ones.

The process scheduler must meet these objectives by implementing


suitable policies for swapping processes in and out of the CPU.

( Note that these objectives can be conflicting. In particular, every time the
system steps in to swap processes it takes up time on the CPU to do so,
which is thereby "lost" from doing any useful productive work. )



Importance of Process Scheduling
Early computer systems were monoprogrammed and, as a result,
scheduling was a non-issue.

For many current personal computers, which are definitely


multiprogrammed, there is in fact very rarely more than one runnable
process. As a result, scheduling is not critical.

For servers (or old mainframes), scheduling is indeed important and these
are the systems you should think of.



Cont
Definition
The process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process on the basis of a particular strategy.

Process scheduling is an essential part of multiprogramming operating
systems.

Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU
using time multiplexing.



Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues.

The OS maintains a separate queue for each of the process states and
PCBs of all processes in the same execution state are placed in the same
queue.

When the state of a process is changed, its PCB is unlinked from its
current queue and moved to its new state queue.



Cont..
The Operating System maintains the following important process
scheduling queues −

Job queue − This queue keeps all the processes in the system.

Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.

Device queues − The processes which are blocked due to the unavailability
of an I/O device constitute this queue.
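The queue-per-state bookkeeping can be sketched in a few lines (a toy model, not a real kernel’s data structures; PCBs are represented by bare process names):

```python
from collections import deque

# One queue per process state; all PCBs in a given state live in that queue.
queues = {"ready": deque(["P1", "P2"]), "waiting": deque()}

def change_state(pid, src, dst):
    queues[src].remove(pid)      # unlink the PCB from its current state queue
    queues[dst].append(pid)      # link it into the new state queue

# P1 blocks on I/O: its PCB moves from the ready queue to a device queue.
change_state("P1", "ready", "waiting")
print(queues)
```

The real OS does the same unlink/relink whenever a process changes state.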



Cont..
The OS can use different policies to manage each
queue (FIFO, Round Robin, Priority, etc.).

The OS scheduler determines how to move processes between the ready
and run queues; the run queue can have only one entry per processor core
on the system.

In this diagram, it has been merged with the CPU.



Two-State Process Model
Running
When a new process is created, it enters the system in the running state.

Not Running
Processes that are not running are kept in queue, waiting for their turn to execute.

Each entry in the queue is a pointer to a particular process.

The queue is implemented using a linked list.

The dispatcher works as follows.

When a process is interrupted, it is transferred to the waiting queue. If
the process has completed or aborted, it is discarded. In either case, the
dispatcher then selects a process from the queue to execute.
Process Scheduling
For now we are discussing the arcs connecting running↔ready in the
diagram on the right showing the various states of a process.

Medium-term scheduling is discussed later, as is disk-arm scheduling.

Naturally, the part of the OS responsible for (short-term, processor)
scheduling is called the (short-term, processor) scheduler.

The algorithm used is called the (short-term, processor) scheduling


algorithm.

Process Scheduling
1. New: Newly Created Process (or) being-created process.

2. Ready: After creation process moves to Ready state, i.e. the process is ready
for execution.

3. Run: Currently running process in CPU (only one process at a time can be
under execution in a single processor).

4. Wait (or Block): When a process requests I/O access.

5. Complete (or Terminated): The process completed its execution.

6. Suspended Ready: When the ready queue becomes full, some processes are
moved to the suspended ready state.

7. Suspended Block: When the waiting queue becomes full, some blocked
processes are moved to the suspended block state.


Context Switching
The process of saving the context of one process and loading the context of
another process is known as Context Switching.

In simple terms, it is like loading and unloading the process from running state to
ready state.

When does context switching happen?


1. When a high-priority process comes to ready state (i.e. with higher priority than
the running process)

2. An Interrupt occurs

3. User and kernel mode switch (though this alone does not always require a full context switch)

4. Preemptive CPU scheduling used.

Context Switch vs Mode Switch
A mode switch occurs when CPU privilege level is changed, for example when a
system call is made or a fault occurs.

The kernel works in a more privileged mode than a standard user task.

If a user process wants to access things which are only accessible to the kernel, a
mode switch must occur.

The currently executing process need not be changed during a mode switch.

A mode switch must typically occur before a process context switch can occur.
Only the kernel can cause a context switch.

CPU-Bound vs I/O-Bound Processes:
A CPU-bound process requires more CPU time or spends more time in the
running state.

An I/O-bound process requires more I/O time and less CPU time.

An I/O-bound process spends more time in the waiting state.



Process Schedulers
Schedulers are special system software that handle process scheduling in
various ways.

Their main task is to select the jobs to be submitted into the system and to decide
which process to run.

Schedulers are of three types −

Long-Term Scheduler

Short-Term Scheduler

Medium-Term Scheduler

Comparison among Scheduler

Context Switching
• A context switch is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same
point at a later time.

• Using this technique, a context switcher enables multiple processes to share a single
CPU.

• Context switching is an essential part of a multitasking operating system features.

• When the scheduler switches the CPU from executing one process to executing
another, the state of the currently running process is stored in its process control
block.

• After this, the state for the process to run next is loaded from its own PCB and used to
set the PC, registers, etc. At that point, the second process can start executing.
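That save/restore cycle can be sketched as follows (the PCB fields and the CPU dictionary are illustrative, not a real kernel structure):

```python
from dataclasses import dataclass, field

# Minimal sketch of the state a PCB preserves across a context switch.
@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old: PCB, new: PCB):
    old.program_counter = cpu["pc"]       # save outgoing state into its PCB
    old.registers = dict(cpu["regs"])
    cpu["pc"] = new.program_counter       # restore incoming state from its PCB
    cpu["regs"] = dict(new.registers)

cpu = {"pc": 104, "regs": {"r0": 7}}      # process A currently running
a = PCB(pid=1)
b = PCB(pid=2, program_counter=500, registers={"r0": 42})

context_switch(cpu, a, b)                 # switch the CPU from A to B
print(cpu["pc"], a.program_counter)       # CPU now at B's PC; A's PC saved
```

After the switch, process B resumes exactly where its PCB says it left off, and A can later be resumed the same way.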



Cont..
Context switches are computationally intensive since
register and memory state must be saved and restored.

To reduce context-switching time, some hardware systems employ two or
more sets of processor registers.

When a process is switched out, the following information is stored for
later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
OS Scheduling Algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based
on particular scheduling algorithms.

There are six popular process scheduling algorithms:

First-Come, First-Served (FCFS) Scheduling

Shortest-Job-Next (SJN) Scheduling

Priority Scheduling

Shortest Remaining Time

Round Robin(RR) Scheduling

Multiple-Level Queues Scheduling


Cont..
These algorithms are either non-preemptive or preemptive.

Non-preemptive algorithms are designed so that once a process enters the running
state, it cannot be preempted until it completes its allotted time.

Preemptive scheduling is based on priority: a scheduler may preempt a low-priority
running process at any time when a high-priority process enters the ready state.



First Come First Serve (FCFS)
Jobs are executed on first come, first serve basis.

It is a non-preemptive scheduling algorithm.

Easy to understand and implement.

Its implementation is based on FIFO queue.

Poor in performance as average wait time is high.
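The high average wait can be seen in a small worked example (hypothetical burst times in ms, all jobs arriving at t=0):

```python
# FCFS: processes run in arrival order; each job waits for all earlier jobs.
bursts = [21, 3, 6, 2]       # hypothetical burst times for P1..P4

waits, clock = [], 0
for b in bursts:
    waits.append(clock)      # waiting time = everything that ran before it
    clock += b               # CPU is busy for this job's full burst

avg_wait = sum(waits) / len(waits)
print(waits, avg_wait)       # [0, 21, 24, 30] 18.75
```

The long first job (21 ms) makes every later job wait, which is why FCFS average waiting time is often poor.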





Shortest Job Next (SJN)
This is also known as shortest job first, or SJF

This is a non-preemptive scheduling algorithm.

Best approach to minimize waiting time.

Easy to implement in Batch systems where required CPU time is known in


advance.

Impossible to implement in interactive systems where the required CPU time is
not known.

The processor must know in advance how much time the process will take.
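A sketch of SJN with hypothetical burst times, known in advance as the slide assumes:

```python
# SJN/SJF: run the job with the shortest known burst time first.
bursts = {"P1": 21, "P2": 3, "P3": 6, "P4": 2}   # hypothetical burst times (ms)

clock, waits = 0, {}
for name, b in sorted(bursts.items(), key=lambda kv: kv[1]):
    waits[name] = clock      # shorter jobs run (and finish) earlier
    clock += b

avg_wait = sum(waits.values()) / len(waits)
print(waits, avg_wait)       # {'P4': 0, 'P2': 2, 'P3': 5, 'P1': 11} 4.5
```

Running the short jobs first minimizes the average waiting time; the long job absorbs the wait instead of imposing it.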


Priority Based Scheduling
Priority scheduling is a non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.

Each process is assigned a priority. The process with the highest priority is
executed first, and so on.

Processes with the same priority are executed on a first come, first served basis.
Priority can be decided based on memory requirements, time requirements,
or any other resource requirement.
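Both rules (highest priority first, FCFS among equals) can be sketched with a stable sort (job names and priority numbers are hypothetical; here a lower number means higher priority):

```python
# Hypothetical jobs: (name, priority); jobs listed in arrival order.
jobs = [("P1", 3), ("P2", 1), ("P3", 2), ("P4", 1)]

# sorted() is stable, so jobs with equal priority keep their
# arrival order -- exactly the FCFS tie-breaking rule above.
order = [name for name, _ in sorted(jobs, key=lambda j: j[1])]
print(order)   # ['P2', 'P4', 'P3', 'P1']
```

P2 and P4 share the top priority, so they run in arrival order; the lowest-priority job P1 runs last.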





Shortest Remaining Time
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.

The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.

Impossible to implement in interactive systems where required CPU time is not known.

It is often used in batch environments where short jobs need to be given preference.



Round Robin Scheduling
Round Robin is a preemptive process scheduling algorithm.

Each process is provided a fixed time to execute, called a quantum.

Once a process has executed for the given time period, it is preempted and
another process executes for its time period.

Context switching is used to save the states of preempted processes.
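The quantum-and-preemption cycle can be simulated in a few lines (hypothetical remaining burst times, quantum of 4):

```python
from collections import deque

# Hypothetical jobs: name -> remaining burst time (ms); quantum = 4 ms.
remaining = {"P1": 10, "P2": 5, "P3": 8}
quantum = 4

queue, clock, finish = deque(remaining), 0, {}
while queue:
    p = queue.popleft()
    run = min(quantum, remaining[p])   # run for one quantum at most
    clock += run
    remaining[p] -= run
    if remaining[p] == 0:
        finish[p] = clock              # record completion time
    else:
        queue.append(p)                # preempted: back to the tail of the queue

print(finish)   # {'P2': 17, 'P3': 21, 'P1': 23}
```

Every job gets CPU time within a few quanta of arriving, which is why Round Robin gives good response times for interactive workloads.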





Multiple-Level Queues Scheduling
Multiple-level queues scheduling is not an independent scheduling algorithm.

They make use of other existing algorithms to group and schedule jobs with
common characteristics.

Multiple queues are maintained for processes with common characteristics.

Each queue can have its own scheduling algorithms.

Priorities are assigned to each queue.

For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound
jobs in another queue. The Process Scheduler then alternately selects jobs from
each queue and assigns them to the CPU based on the algorithm assigned to the
queue.
CPU Scheduling in OS
Arrival Time: Time at which the process arrives in the ready queue.

Completion Time: Time at which process completes its execution.

Burst Time: Time required by a process for CPU execution.

Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time

Comparison among Scheduling Algorithm
FCFS can cause long waiting times, especially when the first job takes too
much CPU time.

Both SJF and Shortest Remaining time first algorithms may cause
starvation. Consider a situation when the long process is there in the
ready queue and shorter processes keep coming.

If the time quantum for Round Robin scheduling is very large, it behaves the same as FCFS scheduling.

SJF is optimal in terms of average waiting time for a given set of processes, i.e., the average waiting time is minimum with this scheduling; the problem is how to know or predict the burst time of the next job.

What is Thread
A thread is a flow of execution through the process code, with its own program counter
that keeps track of which instruction to execute next, system registers which hold its
current working variables, and a stack which contains the execution history.

A thread shares with its peer threads few information like code segment, data segment
and open files. When one thread alters a code segment memory item, all other threads
see that.

A thread is also called a lightweight process.

Threads provide a way to improve application performance through parallelism.

Threads represent a software approach to improving operating system performance by reducing scheduling overhead; in terms of execution context, a thread is equivalent to a classical process.



Cont..
Each thread belongs to exactly one process and no thread can exist outside a process.

Each thread represents a separate flow of control.

Threads have been successfully used in implementing network servers and web servers.

They also provide a suitable foundation for parallel execution of applications on shared
memory multiprocessors.



Difference between Process and Thread
S.N.  Process  |  Thread

1  A process is heavyweight and resource-intensive.  |  A thread is lightweight, taking fewer resources than a process.

2  Process switching needs interaction with the operating system.  |  Thread switching does not need to interact with the operating system.

3  In multiple processing environments, each process executes the same code but has its own memory and file resources.  |  All threads can share the same set of open files and child processes.

4  If one process is blocked, then no other process can execute until the first process is unblocked.  |  While one thread is blocked and waiting, a second thread in the same task can run.

5  Multiple processes without using threads use more resources.  |  Multithreaded processes use fewer resources.

6  In multiple processes, each process operates independently of the others.  |  One thread can read, write or change another thread's data.


Advantages of Thread and its types
Threads minimize the context switching time.

Use of threads provides concurrency within a process.

Efficient communication.

It is more economical to create and context switch threads.

Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Thread

User Level Threads − User managed threads.

Kernel Level Threads − operating system managed threads acting on the kernel, the operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads.

The thread library contains code for creating and destroying threads, for passing
message and data between threads, for scheduling thread execution and for saving
and restoring thread contexts.

The application starts with a single thread.

Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.

Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel.

There is no thread management code in the application area.

Kernel threads are supported directly by the operating system.

Any application can be programmed to be multithreaded.

All of the threads within an application are supported within a single process.

The kernel maintains context information for the process as a whole and for individual threads within the process.

Scheduling by the Kernel is done on a thread basis. The Kernel performs thread
creation, scheduling and management in Kernel space. Kernel threads are generally
slower to create and manage than the user threads.



Cont..
Advantages
The kernel can simultaneously schedule multiple threads from the same process on multiple processors.

If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.

Kernel routines themselves can be multithreaded.

Disadvantages
Kernel threads are generally slower to create and manage than the user threads.

Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating system provide a combined user level thread and Kernel
level thread facility.

Solaris is a good example of this combined approach.

In a combined system, multiple threads within the same application can run in
parallel on multiple processors and a blocking system call need not block the
entire process.

Multithreading models are three types


Many to many relationship.
Many to one relationship.
One to one relationship.



Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.

In this threading model, for example, 6 user level threads are multiplexed onto 6 kernel level threads.

In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine.

This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user level threads to one kernel-level thread.
Thread management is done in user space by the
thread library.

When a thread makes a blocking system call, the entire process is blocked.

Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

If the user-level thread library is implemented on an operating system that does not support kernel threads, the many-to-one model is used.



One to One Model
There is a one-to-one relationship between each user-level thread and a kernel-level thread.
This model provides more concurrency than the
many-to-one model.

It also allows another thread to run when a thread makes a blocking system call.

It supports multiple threads executing in parallel on multiprocessors.

The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread.

OS/2, Windows NT and Windows 2000 use the one to one relationship model.



Difference between User level and Kernel level threads
User-Level Threads

User-level threads are faster to create and manage.


Implementation is by a thread library at the user level.
User-level thread is generic and can run on any operating system.
Multi-threaded applications cannot take advantage of multiprocessing.

Kernel-Level Threads

Kernel-level threads are slower to create and manage.


Operating system supports creation of Kernel threads.
Kernel-level thread is specific to the operating system.
Kernel routines themselves can be multithreaded.



InterProcess Communication (IPC)
• IPC is a set of programming interfaces that allow a programmer to coordinate
activities among different program processes that can run concurrently in an
operating system.

• This allows a program to handle many user requests at the same time.

• Since even a single user request may result in multiple processes running in the
operating system on the user's behalf, the processes need to communicate with
each other.

• The IPC interfaces make this possible.

• Each IPC method has its own advantages and limitations so it is not unusual for
a single program to use all of the IPC methods.

Approaches IPC
File : A record stored on disk, or a record synthesized on demand by a file
server, which can be accessed by multiple processes.

Socket : A data stream sent over a network interface, either to a different


process on the same computer or to another computer on the network.

Typically byte-oriented, sockets rarely preserve message boundaries.

Data written through a socket requires formatting to preserve message boundaries.

Approaches Cont..
Pipe :
A unidirectional data channel.

Data written to the write end of the pipe is buffered by the operating system until it is
read from the read end of the pipe.

Two-way data streams between processes can be achieved by creating two pipes
utilizing standard input and output

Shared Memory :
Multiple processes are given access to the same block of memory which creates a
shared buffer for the processes to communicate with each other.



Approaches Cont..
Message Passing :

Allows multiple programs to communicate using message queues and/or non-OS managed channels; commonly used in concurrency models.

Message queue :

A data stream similar to a socket, but which usually preserves message boundaries.

Typically implemented by the operating system, they allow multiple processes to read
and write to the message queue without being directly connected to each other.



Message passing
Message Passing provides a mechanism for processes to communicate and to
synchronize their actions without sharing the same address space.

IPC facility provides two operations:

• send (message)

• receive (message)



Inter Process
Communication

Why IPC

Unicast and Multicast IPC

Unicast IPC MultiCast IPC

Shared Memory

• This is another mechanism by which processes can communicate with each other.

• In this mechanism we declare a section of memory as shared memory.

• This shared memory section is used by the communicating processes simultaneously.

• We have to synchronize the processes so that they don't alter the shared memory simultaneously.
Allocating a Shared Memory

• A shared memory segment is allocated first.

• Header Files : #include<sys/types.h>


#include<sys/ipc.h>
#include<sys/shm.h>

• shmget() – allocates a shared memory segment.

• int shmget(key_t key, size_t size, int shmflg);



Attaching and Detaching a shared memory

• shmat() – attaches the shared memory segment.

• void *shmat(int shmid, const void *shmaddr, int shmflg);

• shmdt() – detaches the shared memory segment.

• It takes a pointer to the address returned by shmat(); on success it returns 0, on error it returns -1.



Controlling the Shared Memory

• shmctl() – controls operations on the shared memory segment.

• int shmctl(int shmid, int cmd, struct shmid_ds *buf); cmd is one of the following:

• IPC_STAT
• IPC_SET
• IPC_RMID

• IPC_RMID – deletes the shared memory segment.



Semaphores
Semaphores are used to synchronize the processes so that they
can’t access critical section simultaneously.

Semaphores is of two types.


Binary and General Semaphores

Binary semaphore: a binary semaphore is a variable that can take only the values 0 and 1.

General semaphore: a general semaphore can take any non-negative value. Two functions are associated with semaphores: wait() and signal().



Semaphore functions
Header File : #include<sys/types.h>
#include<sys/ipc.h>
#include<sys/sem.h>

semget() – creates a new semaphore set.
int semget(key_t key, int num_sems, int semflag);

semop() – changes the value of the semaphore.
int semop(int semid, struct sembuf *semops, size_t num_sem_ops);

semctl() – allows direct control of semaphore information.
int semctl(int sem_id, int sem_num, int cmd);



What is Semaphore?
Semaphore is simply a variable that is non-negative and shared between
threads.

A semaphore is a signalling mechanism, and a thread that is waiting on a semaphore can be signalled by another thread.

It uses two atomic operations, 1) wait and 2) signal, for process synchronization.

A semaphore either allows or disallows access to the resource, depending on how it is set up.

Characteristic of Semaphore
It is a mechanism that can be used to provide synchronization of tasks.

It is a low-level synchronization mechanism.

Semaphore will always hold a non-negative integer value.

Semaphores can be implemented using atomic test operations or by disabling interrupts.

Types of Semaphores
The two common kinds of semaphores are

Counting semaphores
Binary semaphores.

Counting Semaphores
This type of Semaphore uses a count that helps task to be acquired or released
numerous times.
If the initial count = 0, the counting semaphore should be created in the unavailable
state.



Cont

However, If the count is > 0, the semaphore is created in the available state, and
the number of tokens it has equals to its count.



Cont..

Binary Semaphores

The binary semaphores are quite similar to counting semaphores, but their value is
restricted to 0 and 1.
In this type of semaphore, the wait operation succeeds only if the semaphore = 1, and the signal operation succeeds when the semaphore = 0. Binary semaphores are easier to implement than counting semaphores.



Cont..
Example of Semaphore
The below-given program is a step by step implementation, which involves usage and
declaration of semaphore.
shared var mutex: semaphore = 1;
Process i:
begin
  ...
  P(mutex);
  execute CS;
  V(mutex);
  ...
end;
Wait and Signal Operations in Semaphores
Both of these operations are used to implement process synchronization. The goal of
this semaphore operation is to get mutual exclusion.

Wait Operation

This semaphore operation controls the entry of a task into the critical section. If the value of the semaphore S is positive, S is decremented and the process proceeds; if the value is zero, the process is held up until the required condition is satisfied. It is also called the P(S) operation.
Cont..
P(S)
{
  while (S <= 0)
    ;        // busy wait
  S--;
}

Signal operation

This type of semaphore operation is used to control the exit of a task from a critical section. It increases the value of the argument by 1 and is denoted as V(S).

V(S)
{
  S++;
}
Synchronization Hardware and Software
Sometimes the problems of the critical section are also resolved by hardware. Some operating systems offer a lock functionality where a process acquires a lock when entering the critical section and releases the lock after leaving it.

So when another process tries to enter the critical section, it will not be able to enter as long as the section is locked; it can enter only by acquiring the lock once it is free.

Mutex Locks

Synchronization hardware is not a simple method for everyone to implement, so a strict software method known as Mutex Locks was also introduced.

In this approach, in the entry section of code, a LOCK is obtained over the critical
resources used inside the critical section. In the exit section that lock is released.



Cont..
Semaphore is simply a variable that is non-negative and shared between threads.

It is another algorithm or solution to the critical section problem.

It is a signaling mechanism and a thread that is waiting on a semaphore, which can


be signaled by another thread.

It uses two atomic operations, 1)wait, and 2) signal for the process synchronization



Preemptive Scheduling
Preemptive Scheduling is a scheduling method where the tasks are mostly
assigned with their priorities.

Sometimes it is important to run a task with a higher priority before another


lower priority task, even if the lower priority task is still running.

At that time, the lower priority task holds for some time and resumes when the
higher priority task finishes its execution.



What is Non- Preemptive Scheduling?
In this type of scheduling method, the CPU has been allocated to a specific process.

The process that keeps the CPU busy will release the CPU either by switching context or
terminating.

It is the only method that can be used for various hardware platforms.

That's because it doesn't need specialized hardware (for example, a timer) like preemptive
Scheduling.

Non-Preemptive Scheduling occurs when a process voluntarily enters the wait state or
terminates.



Advantages of Preemptive Scheduling:
Preemptive scheduling is a more robust approach, so one process cannot monopolize the CPU.
The choice of running task is reconsidered after each interruption.
Each event causes an interruption of running tasks.
The OS makes sure that CPU usage is shared fairly by all running processes.
This scheduling method also improves the average response time.
Preemptive scheduling is beneficial in a multiprogramming environment.

Disadvantages of Preemptive Scheduling:
Needs limited computational resources for scheduling.
The scheduler takes more time to suspend the running task, switch the context, and dispatch the new incoming task.
A low-priority process may need to wait a long time if high-priority processes keep arriving.
Advantages of Non-preemptive Scheduling:
Offers low scheduling overhead.
Tends to offer high throughput.
It is a conceptually very simple method.
Needs fewer computational resources for scheduling.

Disadvantages of Non-Preemptive Scheduling:
It can lead to starvation, especially for real-time tasks.
Bugs can cause a machine to freeze up.
It can make real-time and priority scheduling difficult.
Poor response time for processes.


Example of Non-Preemptive Scheduling
In non-preemptive SJF scheduling, once the CPU cycle is allocated to a process, the process holds it till it reaches a waiting state or terminates.
Consider the following five processes each having its own unique burst time and arrival time.

Process Queue   Burst Time   Arrival Time

P1              6            2
P2              2            5
P3              8            1
P4              3            0
P5              4            4

Step 0) At time=0, P4 arrives and starts execution.


Step 1) At time= 1, Process P3 arrives. But, P4 still needs 2 execution units to complete. It will
continue execution.
Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will continue
execution.



Cont..
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue
execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue
execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.

Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.


Cont..
Step 11) Let's calculate the average waiting time for above example.



Example of Pre-emptive Scheduling
Consider this following three processes in Round-robin
Process Queue Burst time Time Slice=2
P1 4
P2 3
P3 5

Step 1) The execution begins with process P1, which has burst time 4. Here, every process
executes for 2 seconds. P2 and P3 are still in the waiting queue.

Step 2) At time =2, P1 is added to the end of the Queue and P2 starts executing

Step 3) At time=4, P2 is preempted and added at the end of the queue. P3 starts executing.

Step 4) At time=6, P3 is preempted and added at the end of the queue. P1 starts executing.

Step 5) At time=8 , P1 has a burst time of 4. It has completed execution. P2 starts execution

Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At time=9, P2 completes execution. Then, P3 starts execution till it completes.
Step 7) Let's calculate the average waiting time for above example.
KEY DIFFERENCES
In Preemptive Scheduling, the CPU is allocated to the processes for a
specific time period, and non-preemptive scheduling CPU is allocated to
the process until it terminates.

In preemptive scheduling, tasks are switched based on priority, while in non-preemptive scheduling no switching takes place.

Preemptive algorithm has the overhead of switching the process from the
ready state to the running state while Non-preemptive Scheduling has no
such overhead of switching.

Preemptive Scheduling is flexible while Non-preemptive Scheduling is rigid.



Process Synchronization: Critical Section Problem in OS
Process Synchronization is the task of coordinating the execution of processes
in a way that no two processes can have access to the same shared data and
resources.

It is especially needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.

This can lead to inconsistency of shared data, as a change made by one process is not necessarily reflected when other processes access the same shared data.

To avoid this type of inconsistency of data, the processes need to be


synchronized with each other.

How Process Synchronization Works?

For example, suppose process A changes the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.

Sections of a Program
Here, are four essential elements of the critical section:

Entry Section: It is part of the process which decides the entry of a particular process.

Critical Section: This part allows one process to enter and modify the shared variable.

Exit Section: The exit section allows the other processes waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution is removed through this section.

Remainder Section: All other parts of the Code, which is not in Critical, Entry, and Exit
Section, are known as the Remainder Section.



What is Critical Section Problem?
A critical section is a segment of code which can be accessed by only a single process at a specific point of time. The section contains shared data or resources that need to be accessed by other processes.

The entry to the critical section is handled by the wait() function, and it is represented as
P().

The exit from a critical section is controlled by the signal() function, represented as V().

In the critical section, only a single process can be executed.

Other processes, waiting to execute their critical section, need to wait until the current
process completes its execution.



Rules for Critical Section

The critical section need to must enforce all three rules:

Mutual Exclusion: Not more than one process can execute in its critical section at one time. Mutual exclusion is often implemented with a special binary semaphore for controlling access to the shared resource, which can include a priority inheritance mechanism to avoid extended priority inversion problems.

Progress: This rule applies when no process is in the critical section and some process wants to enter. Then those processes not in their remainder section should decide who goes in, within a finite time.

Bounded Waiting: When a process makes a request to enter its critical section, there is a specific limit on the number of times other processes can enter their critical sections first. When that limit is reached, the system must allow the requesting process to enter its critical section.
Solutions To The Critical Section
In Process Synchronization, critical section plays the main role so that the problem must
be solved.

Here are some widely used methods to solve the critical section problem.

Peterson Solution

Peterson's solution is a widely used solution to the critical section problem. The algorithm was developed by the computer scientist Peterson, which is why it is named Peterson's solution.

In this solution, when a process is executing in a critical state, then the other process
only executes the rest of the code, and the opposite can happen. This method also
helps to make sure that only a single process runs in the critical section at a specific
time.



Cont..

PROCESS Pi:
FLAG[i] = true;
while ( (turn != i) AND (CS is !free) ) { wait; }
// CRITICAL SECTION
FLAG[i] = false;
turn = j;   // choose another process to go to CS
Assume there are N processes (P1, P2, ... PN) and every process at some
point of time requires to enter the Critical Section

A FLAG[] array of size N is maintained which is by default false. So,


whenever a process requires to enter the critical section, it has to set its
flag as true.

For example, if Pi wants to enter, it will set FLAG[i] = TRUE.

Another variable called TURN indicates the process number which is currently waiting to enter the CS.

The process which enters into the critical section while exiting would
change the TURN to another number from the list of ready processes.
Example: turn is 2 then P2 enters the Critical section and while exiting
turn=3 and therefore P3 breaks out of wait loop.
Synchronization Hardware and Software
Some times the problems of the Critical Section are also resolved by hardware. Some
operating system offers a lock functionality where a Process acquires a lock when
entering the Critical section and releases the lock after leaving it.

So when another process is trying to enter the critical section, it will not be able to enter
as it is locked. It can only do so if it is free by acquiring the lock itself.

Mutex Locks

Synchronization hardware not simple method to implement for everyone, so strict


software method known as Mutex Locks was also introduced.

In this approach, in the entry section of code, a LOCK is obtained over the critical
resources used inside the critical section. In the exit section that lock is released.

Er.pushpendra singh chundawat 1 129


Cont..
Semaphore is simply a variable that is non-negative and shared between threads.

It is another algorithm or solution to the critical section problem.

It is a signaling mechanism and a thread that is waiting on a semaphore, which can


be signaled by another thread.

It uses two atomic operations, 1)wait, and 2) signal for the process synchronization

Er.pushpendra singh chundawat 1 130


Preemptive Scheduling
Preemptive Scheduling is a scheduling method where the tasks are mostly
assigned with their priorities.

Sometimes it is important to run a task with a higher priority before another


lower priority task, even if the lower priority task is still running.

At that time, the lower priority task holds for some time and resumes when the
higher priority task finishes its execution.

Er.pushpendra singh chundawat 1 131


What is Non- Preemptive Scheduling?
In this type of scheduling method, the CPU has been allocated to a specific process.

The process that keeps the CPU busy will release the CPU either by switching context or
terminating.

It is the only method that can be used for various hardware platforms.

That's because it doesn't need specialized hardware (for example, a timer) like preemptive
Scheduling.

Non-Preemptive Scheduling occurs when a process voluntarily enters the wait state or
terminates.

Er.pushpendra singh chundawat 1 132


Advantages of Preemptive Scheduling
It is a more robust approach: one process cannot monopolize the CPU.
The choice of running task is reconsidered after each interruption.
Each event causes an interruption of the running task.
The OS ensures that CPU usage is shared equally among all running processes.
It improves the average response time.
It is beneficial in a multi-programming environment.

Disadvantages of Preemptive Scheduling
Scheduling itself consumes computational resources.
The scheduler takes extra time to suspend the running task, switch the
context, and dispatch the new incoming task.
A low-priority process may wait a long time if high-priority processes
keep arriving.
Advantages of Non-Preemptive Scheduling
It offers low scheduling overhead.
It tends to offer high throughput.
It is a conceptually very simple method.
Fewer computational resources are needed for scheduling.

Disadvantages of Non-Preemptive Scheduling
It can lead to starvation, especially for real-time tasks.
Bugs can cause a machine to freeze up.
It can make real-time and priority scheduling difficult.
It gives poor response time for processes.



Example of Non-Preemptive Scheduling
In non-preemptive SJF scheduling, once the CPU is allocated to a process,
the process holds it until it reaches a waiting state or terminates.
Consider the following five processes, each with its own burst time and
arrival time.

Process Queue   Burst Time   Arrival Time
P1              6            2
P2              2            5
P3              8            1
P4              3            0
P5              4            4

Step 0) At time=0, P4 arrives and starts execution.


Step 1) At time= 1, Process P3 arrives. But, P4 still needs 2 execution units to complete. It will
continue execution.
Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will continue
execution.



Cont..
Step 3) At time = 3, process P4 finishes its execution. The burst times of P3 and P1 are
compared. Process P1 is executed because its burst time is lower than that of P3.

Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will continue
execution.

Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will continue
execution.

Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and P2 is
compared. Process P2 is executed because its burst time is the lowest.

Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.

Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.

Step 9) At time = 15, process P5 will finish its execution.

Step 10) At time = 23, process P3 will finish its execution.


Cont..
Step 11) Let's calculate the average waiting time for the above example.
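The waiting times can be checked with a short simulation of the schedule traced above; in non-preemptive SJF, each process waits from its arrival until it first gets the CPU (variable names are illustrative):

```python
# Simulation of the non-preemptive SJF schedule above.
# (burst, arrival) pairs taken from the table.
procs = {"P1": (6, 2), "P2": (2, 5), "P3": (8, 1), "P4": (3, 0), "P5": (4, 4)}

time, waiting = 0, {}
remaining = dict(procs)
while remaining:
    ready = {p: v for p, v in remaining.items() if v[1] <= time}
    if not ready:                                # CPU idle until next arrival
        time = min(v[1] for v in remaining.values())
        continue
    p = min(ready, key=lambda q: ready[q][0])    # shortest burst first
    burst, arrival = remaining.pop(p)
    waiting[p] = time - arrival                  # waits until it first runs
    time += burst                                # runs to completion

print(waiting)                                   # P4 waits 0, P3 waits 14
print(sum(waiting.values()) / len(waiting))      # average waiting time: 5.2
```

For this data the waiting times are P4 = 0, P1 = 1, P2 = 4, P5 = 7, P3 = 14, giving an average waiting time of 26 / 5 = 5.2.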



Example of Preemptive Scheduling
Consider the following three processes under Round Robin with time slice = 2:

Process Queue   Burst Time
P1              4
P2              3
P3              5

Step 1) The execution begins with process P1, which has burst time 4. Here, every process
executes for 2 seconds. P2 and P3 are still in the waiting queue.

Step 2) At time = 2, P1 is added to the end of the queue and P2 starts executing.

Step 3) At time = 4, P2 is preempted and added to the end of the queue. P3 starts executing.

Step 4) At time = 6, P3 is preempted and added to the end of the queue. P1 starts executing.

Step 5) At time = 8, P1 has a burst time of 4. It has completed its execution. P2 starts executing.

Step 6) P2 has a burst time of 3. It has already executed for 2 intervals. At time = 9, P2 completes
its execution. Then, P3 executes until it completes.

Step 7) Let's calculate the average waiting time for the above example.
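This Round Robin trace can likewise be reproduced in code; all three processes are assumed to arrive at time 0, as in the example:

```python
from collections import deque

# Simulation of the Round Robin trace above (time slice = 2; all three
# processes are assumed to arrive at time 0).
burst = {"P1": 4, "P2": 3, "P3": 5}
time_slice = 2

time, completion = 0, {}
remaining = dict(burst)
queue = deque(["P1", "P2", "P3"])
while queue:
    p = queue.popleft()
    run = min(time_slice, remaining[p])
    time += run
    remaining[p] -= run
    if remaining[p] > 0:
        queue.append(p)          # preempted: back to the end of the queue
    else:
        completion[p] = time     # finished: P1 at 8, P2 at 9, P3 at 12

# Waiting time = completion time - burst time (arrival is 0 for all).
waiting = {p: completion[p] - burst[p] for p in burst}
print(waiting)                   # P1: 4, P2: 6, P3: 7
print(sum(waiting.values()) / len(waiting))
```

The completion times match the steps above (P1 at 8, P2 at 9, P3 at 12), giving waiting times of 4, 6, and 7 and an average of 17 / 3 ≈ 5.67.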
KEY DIFFERENCES
In preemptive scheduling, the CPU is allocated to a process for a limited
time period; in non-preemptive scheduling, the CPU is allocated to the
process until it terminates.

In preemptive scheduling, tasks are switched based on priority, while in
non-preemptive scheduling no switching takes place.

A preemptive algorithm has the overhead of switching processes between the
ready state and the running state, while non-preemptive scheduling has no
such switching overhead.

Preemptive scheduling is flexible, while non-preemptive scheduling is rigid.



REFERENCES
Text/Reference Books:

1. A. Silberschatz and Peter B. Galvin: Operating System Principles, Wiley India Pvt. Ltd.

2. Achyut S. Godbole: Operating Systems, Tata McGraw Hill.

3. A. S. Tanenbaum: Modern Operating Systems, Prentice Hall.

4. D. M. Dhamdhere: Operating Systems – A Concepts Based Approach, Tata McGraw Hill.

5. Charles Crowley: Operating Systems: A Design-Oriented Approach, Tata McGraw Hill.


