
MEERUT INSTITUTE OF ENGINEERING & TECHNOLOGY,

MEERUT

Course Content
for
Operating System (Sub Code-BCS401)

B.Tech. II Year
CSE, CS, IT, CS-IT, CSE (AI), CSE (AI&ML), CSE
(DS) and CSE (IOT)

DR. A.P.J. ABDUL KALAM TECHNICAL UNIVERSITY LUCKNOW



Vision of Institute

To be an outstanding institution in the country imparting technical education, providing need-based, value-based and career-based programs and producing self-reliant, self-sufficient technocrats capable of meeting new challenges.

Mission of Institute

The mission of the institute is to educate young aspirants in various technical fields to fulfill global
requirements of human resources by providing sustainable quality education, training and
invigorating environment besides molding them into skilled competent and socially responsible
citizens who will lead the building of a powerful nation.

Vision of Department

To become a globally recognized department where talented minds working at the frontier of the Internet of Things (IoT) are nurtured to meet the needs of industry, society, and the economy, and to serve the nation.

Mission of Department
To provide resources of excellence and mold fresh minds into highly competent IoT application developers, enhancing their knowledge and skills through emerging technologies and multi-disciplinary engineering practices.
To equip students and provide state-of-the-art facilities to develop industry-ready IoT systems.
To promote industry collaborations so that students can pursue the best careers.



Course Outcome

BCS 401 OPERATING SYSTEM

Course Outcome (CO) and Bloom's Knowledge Level (KL)
At the end of the course, the student will be able to:
CO 1: Understand the structure and functions of OS (K1, K2)
CO 2: Learn about processes, threads and scheduling algorithms (K1, K2)
CO 3: Understand the principles of concurrency and deadlocks (K2)
CO 4: Learn various memory management schemes (K2)
CO 5: Study I/O management and file systems (K2, K4)
DETAILED SYLLABUS (3-0-0)

Unit I (08 lectures)
Introduction: Operating system and functions, Classification of Operating systems - Batch, Interactive, Time sharing, Real Time System, Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems, Operating System Structure - Layered structure, System Components, Operating System services, Reentrant Kernels, Monolithic and Microkernel Systems.

Unit II (08 lectures)
Concurrent Processes: Process Concept, Principle of Concurrency, Producer/Consumer Problem, Mutual Exclusion, Critical Section Problem, Dekker's solution, Peterson's solution, Semaphores, Test and Set operation; Classical Problems in Concurrency - Dining Philosophers Problem, Sleeping Barber Problem; Inter-Process Communication models and schemes, Process generation.

Unit III (08 lectures)
CPU Scheduling: Scheduling Concepts, Performance Criteria, Process States, Process Transition Diagram, Schedulers, Process Control Block (PCB), Process address space, Process identification information, Threads and their management, Scheduling Algorithms, Multiprocessor Scheduling.
Deadlock: System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery from deadlock.

Unit IV (08 lectures)
Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged segmentation, Virtual memory concepts, Demand paging, Performance of demand paging, Page replacement algorithms, Thrashing, Cache memory organization, Locality of reference.

Unit V (08 lectures)
I/O Management and Disk Scheduling: I/O devices and I/O subsystems, I/O buffering, Disk storage and disk scheduling, RAID.
File System: File concept, File organization and access mechanism, File directories, File sharing, File system implementation issues, File system protection and security.



Course Plan
MEERUT INSTITUTE OF ENGINEERING & TECHNOLOGY
(Department of Computer Science & Engineering & Allied branches)
Subject: Operating System
Sub Code: BCS 401
(Lectures on prerequisites and on content beyond the syllabus may be added to cover gaps.)

Lecture No.  CO    Lecture Topic (Date Planned / Date Executed columns left blank)

1   CO-1  Introduction: Operating system and functions
2   CO-1  Classification of Operating systems - Batch, Interactive, Time sharing, Real Time System
3   CO-1  Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems
4   CO-1  Operating System services, System Components
5   CO-1  Operating System Structure - Layered structure, Monolithic Kernel
6   CO-1  Microkernel Systems, Reentrant Kernels
7   CO-2  Scheduling concepts, Performance criteria, Process States, Process Transition Diagram
8   CO-2  Schedulers, Process Control Block (PCB)
9   CO-2  Process address space, Process identification information, Threads and their management
10  CO-2  Scheduling: FCFS, SJF
11  CO-2  Priority Scheduling
12  CO-2  Round Robin, Multilevel Queue
13  CO-2  Multilevel Feedback Queue, Multiprocessor Scheduling, Threads and their management
14  CO-2  Deadlock
15  CO-2  Methods to prevent Deadlock
16  CO-2  Deadlock Prevention
17  CO-2  Deadlock Avoidance
18  CO-2  Deadlock Recovery
19  CO-3  Concurrent Processes: Process Concept
20  CO-3  Principle of Concurrency
21  CO-3  Producer/Consumer Problem
22  CO-3  Mutual Exclusion
23  CO-3  Peterson's solution, Test and Set operation
24  CO-3  Classical Problems in Concurrency - Dining Philosophers Problem
25  CO-3  Sleeping Barber Problem
26  CO-3  Semaphores
27  CO-3  Semaphore solutions
28  CO-3  Critical Section Problem
29  CO-3  Numericals on the Critical Section Problem
30  CO-3  Solutions to the Critical Section Problem
31  CO-4  Memory management
32  CO-4  Techniques of Memory management
33  CO-4  Paging
34  CO-4  Paging Numericals
35  CO-4  Difference between Paging and Segmentation
36  CO-4  Segmentation
37  CO-4  Segmentation Numericals
38  CO-4  Paged Segmentation
39  CO-4  Page replacement algorithms
40  CO-4  Numericals on Page replacement algorithms
41  CO-4  LRU and Optimal algorithms
42  CO-4  Numericals on LRU and Optimal algorithms
43  CO-4  Thrashing
44  CO-4  Cache memory organization, Locality of reference
45  CO-5  I/O management
46  CO-5  Disk Scheduling
47  CO-5  Disk Scheduling Algorithms
48  CO-5  Numericals on Disk Scheduling Algorithms
49  CO-5  RAID
50  CO-5  File management
51  CO-5  Protection and security, Value Addition
52  CO-5  Numericals on File management


UNIT-1 (INTRODUCTION)
Syllabus of CO-1: Operating system and functions, Classification of Operating systems - Batch, Interactive, Time sharing, Real Time System, Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems, Operating System Structure - Layered structure, System Components, Operating System services, Reentrant Kernels, Monolithic and Microkernel Systems.

Lecture-1

1.1 Introduction of Course outcomes and over view of the syllabus

1.2 What is an Operating System?

The operating system (OS) is one of the programs that runs on the hardware and enables the user to communicate with it by sending input and receiving output. It allows the user, computer applications, and system hardware to connect with one another; the operating system therefore acts as a hub. An operating system is the program that, after being initially loaded into the computer by a boot program, manages all of the other application programs in a computer. The application programs make use of the operating system by requesting services through a defined application program interface (API). In addition, users can interact directly with the operating system through a user interface, such as a command-line interface (CLI) or a graphical user interface (GUI). Without an operating system, a computer and its software would be useless. An operating system can thus be defined as an interface between the user and the hardware. It is responsible for the execution of all processes, resource allocation, CPU management, file management, and many other tasks.
1.3 Why use an operating system?

An operating system brings powerful benefits to computer software and software development. Without an operating system, every application would need to include its own UI, as well as the comprehensive code needed to handle all low-level functionality of the underlying computer, such as disk storage, network interfaces, and so on. Considering the vast array of underlying hardware available, this would vastly bloat the size of every application and make software development impractical.

Instead, many common tasks, such as sending a network packet or displaying text on a standard output device such as a display, can be offloaded to system software that serves as an intermediary between the applications and the hardware. The system software provides a consistent and repeatable way for applications to interact with the hardware without the applications needing to know any details about the hardware. As long as each application accesses the same resources and services in the same way, that system software -- the operating system -- can service almost any number of applications. This vastly reduces the amount of time and coding required to develop and debug an application, while ensuring that users can control, configure, and manage the system hardware through a common and well-understood interface.

Once installed, the operating system relies on a vast library of device drivers to tailor OS services to the specific hardware environment. Thus, every application may make a common call to a storage device, but the OS receives that call and uses the corresponding driver to translate it into the actions (commands) needed by the underlying hardware on that specific computer. The operating system also provides a comprehensive platform that identifies, configures, and manages a range of hardware, including processors; memory devices and memory management; chipsets; storage; networking; port communication, such as Video Graphics Array (VGA), High-Definition Multimedia Interface (HDMI), and Universal Serial Bus (USB); and subsystem interfaces, such as Peripheral Component Interconnect Express (PCIe).
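As a hedged illustration of this layering, the C sketch below (assuming a POSIX system) prints a message through the write() system call: the application names only a file descriptor and a buffer, and the OS together with the device driver turns the request into hardware commands.

    #include <string.h>   /* strlen() */
    #include <unistd.h>   /* write(): POSIX system-call wrapper */

    int main(void)
    {
        const char *msg = "Hello from user space\n";
        /* The application requests an OS service through a defined
         * interface; it never touches the display hardware directly.
         * File descriptor 1 is standard output. */
        write(1, msg, strlen(msg));
        return 0;
    }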
1.4 What are Characteristics of Operating System:
Virtualization: Operating systems can provide Virtualization capabilities, allowing multiple operating systems or instances
of an operating system to run on a single physical machine. This can improve resource utilization and provide isolation
between different operating systems or applications.

Networking: Operating systems provide networking capabilities, allowing the computer system to connect to other systems and devices over a network. This can include features such as network protocols, network interfaces, and network security.



Scheduling: Operating systems provide scheduling algorithms that determine the order in which tasks are executed on the
system.
These algorithms prioritize tasks based on their resource requirements and other factors to optimize system performance.
Interprocess Communication: Operating systems provide mechanisms for applications to communicate with each other, allowing them to share data and coordinate their activities.
Performance Monitoring: Operating systems provide tools for monitoring system performance, including CPU usage,
memory usage, disk usage, and network activity. This can help identify performance bottlenecks and optimize system
performance.
Backup and Recovery: Operating systems provide backup and recovery mechanisms to protect data in the event of system
failure or data loss.
Debugging: Operating systems provide debugging tools that allow developers to identify and fix software bugs and other
issues in the system.
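To make the interprocess-communication point concrete, here is a minimal sketch (assuming a POSIX system) in which a parent process sends a message to its child through a pipe; the kernel carries the data between the two address spaces.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>   /* pipe(), fork(), read(), write() */

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1)            /* kernel-managed channel: fd[0] = read end */
            return 1;

        if (fork() == 0) {             /* child: reads what the parent wrote */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            return 0;
        }
        const char *msg = "hello via the OS";
        write(fd[1], msg, strlen(msg) + 1);   /* parent: writer */
        return 0;
    }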

1.5 What are Operating System Components

The components of an operating system play a key role to make a variety of computer system parts work together. There
are the following components of an operating system, such as:
• Hardware
• Application Program
• Operating System
• Users

Hardware: Computer hardware is a collective term used to describe any of the physical components of an analog or digital
computer. Computer hardware can be categorized as being either internal or external components. Generally, internal
hardware components are those necessary for the proper functioning of the computer, while external hardware components
are attached to the computer to add or enhance functionality.
Operating System: The operating system (OS) is one of the programs that run on the hardware and enables the user to
communicate with it by sending input commands and output commands. It allows the user, computer applications, and
system hardware to connect with one another, therefore the operating system acts as a hub. Without an operating system, a
computer and software must be useless.
User: Users perform computation with the help of an application program. A user is someone or something that wants or needs access to a system's resources; another word for user is client. On the Windows operating system, for example, a user is a person who has an account on the computer or device, and users can have different levels of access and permissions depending on their account type.
Application Program: Applications programs are programs written to solve specific problems, to produce specific reports,
or to update specific files. A computer program that performs useful work on behalf of the user of the computer (for
example a word processing or accounting program) as opposed to the SYSTEM SOFTWARE which manages the running of
the computer itself, or to the DEVELOPMENT software which is used by programmers to create other programs. An
application program is typically self-contained, storing data within files of a special (often proprietary) format that it can
create, open for editing and save to disk.

Figure 1.1: Abstract view of the components of a computer system
1.6 What are the Functions of Operating System:

Booting: The process of starting or restarting the computer is known as booting. Starting the computer from a completely switched-off state is called cold booting; warm booting is the process of using the operating system itself to restart the computer. During booting, the system:

● Uses diagnostic routines to test the system for equipment failure.
● Copies the BIOS (Basic Input Output System) programs from ROM chips into main memory (RAM).
● Loads the operating system into the computer's main memory (RAM).
Memory Management: The operating system manages the primary (main) memory. Main memory is made up of a large array of bytes or words, where each byte or word has its own address. Main memory is fast storage that can be accessed directly by the CPU. For a program to be executed, it must first be loaded into main memory. An operating system performs the following activities for memory management: it keeps track of primary memory, i.e., which bytes of memory are used by which user program, which memory addresses have already been allocated, and which have not yet been used. In multiprogramming, the OS decides the order in which processes are granted memory access, and for how long. It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation. A toy sketch of this bookkeeping follows.
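Concretely, the program below is a toy model (not any real OS's implementation) of such an allocation table: a fixed-partition scheme in which the OS records which process, if any, owns each partition and marks it free again on termination. The partition count and the PID are hypothetical.

    #include <stdio.h>

    #define NPART 4                      /* hypothetical number of partitions */

    int main(void)
    {
        int owner[NPART] = {0};          /* 0 = free, otherwise the owner PID */
        int pid = 42;                    /* hypothetical process */

        /* Allocation: give the process the first free partition. */
        for (int i = 0; i < NPART; i++)
            if (owner[i] == 0) { owner[i] = pid; break; }

        for (int i = 0; i < NPART; i++)
            printf("partition %d: %s\n", i, owner[i] ? "used" : "free");

        /* Deallocation on termination: mark the partition free again. */
        for (int i = 0; i < NPART; i++)
            if (owner[i] == pid) owner[i] = 0;
        return 0;
    }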

Processor Management: In a multiprogramming environment, the OS decides the order in which processes have access to the processor, and how much processing time each process has. This function of the OS is called process scheduling. An operating system performs the following activities for processor management: it keeps track of the status of processes (the program that performs this task is known as the traffic controller), allocates the CPU (processor) to a process, and deallocates the processor when a process no longer requires it.

Device Management: An OS manages device communication via the devices' respective drivers. It performs the following activities for device management: keeps track of all devices connected to the system; designates a program responsible for every device, known as the input/output controller; decides which process gets access to a certain device and for how long; allocates devices effectively and efficiently; and deallocates devices when they are no longer required.

Process Management: A process is a program under execution. The operating system manages all processes so that each process gets the CPU for a specific time to execute, and so that the waiting time for each process stays low. This management is also called process scheduling. For process scheduling, operating systems use various algorithms: FCFS, SJF, LJF, Round Robin, and priority scheduling; a small worked example for FCFS is sketched below.
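In the sketch below (hypothetical burst times, all processes assumed to arrive at time 0), each process under FCFS simply waits for all earlier arrivals to finish, which makes the waiting and turnaround times easy to compute.

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};           /* hypothetical CPU bursts (ms) */
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0, total_tat = 0;

        for (int i = 0; i < n; i++) {
            int tat = wait + burst[i];      /* turnaround = waiting + burst */
            printf("P%d: waiting=%2d turnaround=%2d\n", i + 1, wait, tat);
            total_wait += wait;
            total_tat  += tat;
            wait += burst[i];               /* the next process starts here */
        }
        printf("average waiting=%.2f average turnaround=%.2f\n",
               (double)total_wait / n, (double)total_tat / n);
        return 0;
    }

Note that running the same three bursts in the order 3, 3, 24 would cut the average waiting time sharply, which is why FCFS is sensitive to arrival order.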

File Management: A file system is organized into directories for efficient and easy navigation and usage. These directories may contain other directories and other files. An operating system carries out the following file management activities: it keeps track of where information is stored, user access settings, the status of every file, and more. These facilities are collectively known as the file system.
User Interface or Command Interpreter:
The user interacts with the computer system through the operating system. Hence the OS acts as an interface
between the user and the computer hardware. This user interface is offered through a set of commands or a
graphical user interface (GUI). Through this interface, the user makes interaction with the applications and
the machine hardware.
Security: The operating system uses password protection and similar techniques to protect user data. It also prevents unauthorized access to programs and user data.

Job Accounting: The operating system keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users.
Error-detecting aids: The operating system constantly monitors the system to detect errors and prevent the computer system from malfunctioning.



Coordination between other software and users: Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software to the various users of the computer system.
Performs basic computer tasks: The management of various peripheral devices, such as the mouse, keyboard, and printer, is carried out by the operating system. Most operating systems today are plug-and-play: they automatically recognize and configure devices with no user intervention.
Network Management: The OS provides network connectivity and manages communication between computers on a network. It also manages network security by providing firewalls and other security measures.

Lecture-2

2.1 What are the Goals of an Operating System?

Convenience (user-friendliness): An OS makes a computer more convenient to use.

Efficiency: An OS allows the computer system resources to be used in an efficient manner.


Portability: A portable operating system can be carried on a physical drive and is compatible with a wide range of hardware
systems. Most portable operating systems are small and come with a CD or USB drive. The process of executing an OS
from a CD/USB drive is known as using a live CD or USB.
Reliability: We consider an operating system to be reliable if it delivers the expected service without any interruptions
during the normal operating mode, where a normal operating mode is defined as the execution environment free from
external factors, such as a critical hardware failure.
Scalability: Scalability is the measure of a system's ability to increase or decrease in performance and cost in response to
changes in application and system processing demands.
Robustness: Robustness is the ability of a computer system to cope with errors during execution and cope with erroneous
input. Robustness can encompass many areas of computer science, such as robust programming, robust machine learning,
and Robust Security Network.
Ability to evolve: An OS should be constructed in such a way as to permit the effective development, testing, and
introduction of new system functions without interfering with service.

2.2 Classification of Operating System

Batch Operating System

Batch processing was very popular in the 1970s. Jobs were executed in batches on a single computer known as a mainframe. Users of batch operating systems do not interact directly with the computer: each user prepares a job using an offline device like a punch card and submits it to the computer operator. Jobs with similar requirements are grouped and executed as a group to speed up processing. Once the programmers have left their programs with the operator, the operator sorts the programs with similar needs into batches. The batch operating system thus groups jobs that perform similar functions; each group is treated as a batch and executed together. A computer system with this operating system performs the following batch processing activities:

● A job is a single unit that consists of a preset sequence of commands, data, and programs.
● Processing takes place in the order in which jobs are received, i.e., first come, first served.
● Jobs are stored in memory and executed without the need for manual intervention. When a job is successfully run, the operating system releases its memory.



Types of Batch Operating System

There are mainly two types of the batch operating system. These are as follows:

1. Simple Batched System


2. Multi-programmed batched system

Simple Batched System

In a simple batch operating system, the user did not directly interact with the computer system for job execution. Instead, the user prepared a job that included the program, control information, and data on the nature of the job on control cards, and the job, usually in the form of punch cards, was then submitted to the computer operator. The program's output included the results and, in the event of a program error, register and memory dumps. The output appeared after some time that could take minutes, hours, or days. The system's main role was to transfer control from one job to the next. Jobs with similar requirements were pooled together and processed through the processor to improve processing speed; the operators created batches of programs with similar needs, and the computer ran the batches one by one as they became available. Such a system typically reads a sequence of jobs, each with its own control cards and predefined job tasks.

Multi-programmed Batched System

Spooling deals with many jobs that have already been read and are waiting to run on disk. A disk containing a pool of jobs allows the operating system to choose which job to run next so as to maximize CPU utilization. Jobs that arrive directly on magnetic tape or cards cannot be run in a different order; they run sequentially, in a first-come, first-served manner. When jobs are stored on a direct-access device such as a disk, however, job scheduling becomes possible. Multiprogramming is an important feature of job scheduling. Spooling and offline operation for overlapped I/O have their limitations: a single user generally cannot keep all of the input/output devices and the CPU busy at all times. In the multi-programmed batched system, several jobs are kept in memory at a time; the operating system selects one job and begins executing it. When that job must wait for some task to complete, such as mounting a tape or an I/O operation, the CPU does not sit idle: the operating system simply switches to another job in memory. When the wait finishes and the current job completes, the CPU is returned to the earlier job.

Why are Batch Operating Systems used?

Batch operating systems place less stress on the CPU and involve minimal user interaction, which is why they are still used today. Another benefit of batch operating systems is that huge repetitive jobs can be completed without the user having to interact with the computer to tell the system that the next job should begin after the current one finishes.
● Old batch operating systems weren't interactive, which means that the user did not interact with the program while executing it.
● Modern batch operating systems now support interaction. For example, you may schedule a job, and when the specified time arrives, the computer acknowledges the processor that the time is up.
How does the Batch Operating System work?
● The operating system keeps a number of jobs in memory and performs them one at a time.
● Jobs are processed in a first-come, first-served manner.
● Each set of jobs is defined as a batch. When a task is finished, its memory is freed, and the job's output is transferred to an output spool for later printing or processing.
● User interaction is limited in the batch operating system. Once the system takes the task from the user, the user is free.
● You may also use the batch processing system to update data relating to any transactions or records.
A toy first-come, first-served batch loop is sketched below.
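The loop below is a toy batch monitor, not any real system's code: jobs are taken from a FIFO queue, each runs to completion with no user interaction, and its memory is conceptually freed when it finishes. Job names and run times are hypothetical.

    #include <stdio.h>

    struct job { const char *name; int minutes; };

    int main(void)
    {
        struct job batch[] = {              /* hypothetical job pool */
            {"payroll", 30}, {"inventory", 15}, {"report", 5}
        };
        int n = sizeof batch / sizeof batch[0];
        int clock = 0;

        for (int i = 0; i < n; i++) {       /* strict first come, first served */
            printf("t=%3d  running %s\n", clock, batch[i].name);
            clock += batch[i].minutes;      /* the job runs to completion */
            printf("t=%3d  %s done, memory released\n", clock, batch[i].name);
        }
        return 0;
    }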



Role of Batch Operating System
● A batch operating system's primary role is to execute jobs in batches automatically.
● The main task of a batch processing system is done by the 'Batch Monitor', which resides at the low end of main memory.
● This technique was made possible by the development of hard disk drives and card readers. Jobs can now be stored on a disk to form a pool of jobs for batch execution.
● After that, they are grouped, with similar jobs being placed in the same batch. As a result, the batch operating system automatically runs the batched jobs one after the other, saving time by performing setup tasks only once.
● This results in a better system due to reduced turnaround time.

Characteristics of Batch Operating System

There are various characteristics of the batch operating system. Some of them are as follows:

● The CPU executes jobs in the same sequence in which the operator sends them, which implies that the task sent to the CPU first will be executed first. This is also known as 'first come, first served'.
● The word job refers to the command or instruction that the user wants the program to perform.
● A batch operating system runs a set of user-supplied instructions composed of distinct instructions and programs with several similarities.
● When a task is successfully executed, the OS releases the memory space held by that job.
● The user does not interface directly with the operating system in a batch operating system; rather, all instructions are sent to the operator.
● The operator evaluates the users' instructions and creates sets of instructions having similar properties.
Advantages
There are various advantages of the Batch Operating System. Some of them are as follows:

● It isn't easy for a user to forecast how long it will take to complete a job; only the batch system's processors know how long it will take to finish the jobs in line.
● This system can easily manage large jobs repeatedly.
● The batch process can be divided into several stages to increase processing speed.
● When a process is finished, the next job from the job spool is run without any user interaction.
● CPU utilization gets improved.

Disadvantages

There are various disadvantages of the Batch Operating System. Some of them are as follows:

● When a job fails once, it must be rescheduled, and it may take a long time to complete the task.
● Computer operators must have full knowledge of batch systems.
● The batch system is quite difficult to debug.
● The computer system and the user have no direct interaction.
● If a job enters an infinite loop, other jobs must wait for an unknown period of
time.
Uniprogramming Operating System
Uniprogramming means that only a single task or program is in main memory at a particular time. It was more common in early computers and mobiles, where one could run only a single application at a time.



Characteristics of Uniprogramming:
● It allows only one program to sit in the memory at one time.
● The size is small as only one program is present.
● The resources are allocated to the program that is in the memory at that time.
Advantages of uniprogramming
● The uniprogramming memory management system is simple and has few bugs.
● It executes with minimal overhead.
● Once an application is loaded, it is guaranteed 100% of the processor's time, since no other process will interrupt it.

Disadvantages of uni-programming:
● Wastage of CPU time.
● No user interaction.
● No mechanism to prioritize processes.

Multiprogramming Operating System

A multiprogramming OS is able to execute more than one program using a single processor machine: more than one task, program, or job is present inside main memory at one point in time. Buffering and spooling can overlap I/O and CPU tasks to improve system performance, but they have the limitation that a single user cannot always keep the CPU or I/O devices busy all the time. To increase resource utilization, multiprogramming is used: the OS picks and starts the execution of one of the jobs in memory, and whenever that job does not need the CPU (for example, while it is working with I/O), the OS switches to another job in memory, whose instructions the CPU then executes until that job, in turn, issues an I/O request, and so on. For instance, let P1 and P2 be two programs present in main memory. The OS picks one program and starts executing it. If P1 requires an I/O operation during execution, the OS simply switches over to P2; if P2 also requires I/O, it switches to the next program, and so on. When no other program remains, the CPU passes control back to the previous program. A small trace of this switching is sketched below.
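The trace program below is a toy model of that P1/P2 hand-off, with hypothetical burst counts: whenever the running program starts an I/O operation, the "OS" switches the CPU to the other program rather than letting it idle.

    #include <stdio.h>

    int main(void)
    {
        int cpu_left[2] = {3, 2};   /* hypothetical remaining CPU bursts */
        int running = 0;            /* start with P1 */

        while (cpu_left[0] > 0 || cpu_left[1] > 0) {
            if (cpu_left[running] == 0) {    /* finished: run the other one */
                running = 1 - running;
                continue;
            }
            printf("P%d uses the CPU\n", running + 1);
            cpu_left[running]--;
            if (cpu_left[running] > 0) {
                printf("P%d requests I/O -> OS switches\n", running + 1);
                running = 1 - running;       /* context switch */
            }
        }
        return 0;
    }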

Features of Multiprogramming

● Shorter response time.
● Better resource utilization.
● It may help to improve turnaround time for any type of task.
● Resources are utilized smartly.
● Multiple users may use the multiprogramming system at once.

How do Multiprogramming Operating Systems Work?

In a multiprogramming system, multiple tasks are stored in main memory and multiple users can execute tasks simultaneously. If the running program is involved in an I/O operation, the CPU's otherwise idle time can be given to a different program: while one program waits for an I/O transfer, another program is always ready to use the processor, so many programs may share CPU time. Not all tasks run at the same instant, but many tasks may run concurrently on a single processor, with some processes running first, then others, and so on. The overall goal of any multiprogramming system is to keep the CPU busy as long as there is some task in the job pool to run; without multiprogramming, programs on a single-processor computer would sit idle without using the CPU.



Advantages
● CPU utilization is high because the CPU rarely goes idle.
● Memory utilization is efficient.
● CPU throughput is high, and the system can also support multiple interactive user terminals.
● It provides a shorter response time.
● It may help to run various jobs in a single application simultaneously.
● It helps to optimize the total job throughput of the computer.
● Various users may use the multiprogramming system at once.
● Short jobs are completed quickly in comparison to long jobs.
● It may help to improve turnaround time for short tasks.
● Resources are utilized smartly.

Disadvantages

The disadvantages of a multiprogramming operating system are as follows:

● CPU scheduling is compulsory because many jobs are ready to run on the CPU simultaneously.
● The user is not able to interact with a job while it is executing.
● Programmers also cannot modify a program that is being executed.
● If several jobs are ready in main memory and there is not enough space for all of them, then the system has to choose among them by making a decision; this process is called job scheduling.
● When the operating system selects a job from the group of jobs and loads it into memory for execution, it needs memory management; if several such jobs are ready, it also needs CPU scheduling.

Multi-tasking Operating System (Time Sharing Operating System)

Multitasking, in an operating system, allows a user to perform more than one computer task (such as the operation of an application program) at a time. The operating system is able to keep track of where you are in each of these tasks and go from one to the other without losing information. Microsoft Windows 2000, IBM's OS/390, and Linux are examples of operating systems that can do multitasking (almost all of today's operating systems can). When you open your web browser and then open Word at the same time, you are causing the operating system to multitask.

Being able to multitask doesn't mean that an unlimited number of tasks can be juggled at the same time. Each task consumes system storage and other resources, and as more tasks are started, the system may slow down or begin to run out of shared storage.

Multitasking is a term used in modern computer systems. It is a logical extension of multiprogramming that enables the execution of multiple programs simultaneously. The multiple tasks, also known as processes, share common processing resources such as the CPU. Early operating systems could load several programs, but multitasking was not fully supported; as a result, a single program could consume the entire CPU of the computer while completing a certain activity, and basic operating system functions, such as file copying, prevented the user from completing other tasks, such as opening and closing windows. Fortunately, because modern operating systems have complete multitasking capability, numerous programs can run concurrently without interfering with one another, and many operating system processes can run at the same time.



Fig. 1.2: Structure of a multitasking OS
Types of Multitasking
There are mainly two types of multitasking. These are as follows:
1. Preemptive Multitasking
2. Cooperative Multitasking
Preemptive Multitasking
In preemptive multitasking, the operating system decides how long one task may use the CPU before another task gets a turn. Because the operating system controls the entire process, it is referred to as 'preemptive'. Preemptive multitasking is used in desktop operating systems. Unix was the first operating system to use this method of multitasking; Windows NT and Windows 95 were the first versions of Windows to use it, and the Macintosh acquired preemptive multitasking with OS X. The operating system notifies programs when it's time for another program to take over the CPU.
Cooperative Multitasking
Cooperative multitasking is also referred to as 'non-preemptive multitasking'. Its main principle is that the currently running task itself releases the CPU to allow another process to run, typically by calling a yield function (e.g., taskYIELD() in some kernels); when the yield call is made, a context switch is executed. Early versions of Windows and Mac OS used cooperative multitasking: a Windows program would respond to a message by performing some short unit of work before handing the CPU back to the operating system until the program received another message. This worked well as long as all programs were written with other programs in mind and were bug-free. A toy cooperative scheduler is sketched below.
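As a rough sketch of the cooperative idea (not the actual Windows or Mac OS mechanism), each task below does one short unit of work and then returns, which plays the role of yielding; a task that never returned would starve all the others. Task names and step counts are hypothetical.

    #include <stdio.h>

    /* Each task runs one step and "yields" by returning;
     * a return value of 0 means the task has finished. */
    static int task_a(void) { static int n; printf("A step %d\n", ++n); return n < 3; }
    static int task_b(void) { static int n; printf("B step %d\n", ++n); return n < 2; }

    int main(void)
    {
        int (*tasks[])(void) = { task_a, task_b };
        int alive[] = { 1, 1 };
        int remaining = 2;

        while (remaining > 0)
            for (int i = 0; i < 2; i++)
                if (alive[i] && !tasks[i]()) {   /* task yielded and finished */
                    alive[i] = 0;
                    remaining--;
                }
        return 0;
    }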

Advantages of Multitasking:

Manage Several Users
This operating system is better suited to supporting multiple users simultaneously, and multiple apps can run smoothly without degrading system performance.
Virtual Memory
Multitasking operating systems have the best virtual memory systems. Because of virtual memory, a program does not need a long wait time to complete its tasks; if memory runs short, programs are moved to virtual memory.
Good Reliability
Multitasking operating systems give more flexibility to their several users, each of whom can execute single or multiple programs simultaneously, and users are happier as a result.
Secured Memory
Multitasking operating systems have well-defined memory management and do not grant undesirable programs permission to waste memory.
Time Shareable
All tasks are allotted a specified amount of time so that they do not have to wait long for the CPU.
Background Processing
A multitasking operating system provides a better environment for background processes to run. These background programs are not visible to most users, but they help other programs like firewalls, antivirus software, and others run well.



Optimize the computer resources
A multitasking operating system may manage various computer resources, like I/O devices, RAM, hard disk, CPU, and others.
Use Several Programs
Users can run many programs simultaneously, like an internet browser, games, MS Excel, PowerPoint, and other utilities.

Disadvantages of Multitasking:
Processor Limitations
The system may run programs slowly because of the limited speed of its processor, and reaction time may rise when it processes many programs at once. Solving this problem requires more processing power.
Memory Limitations
The computer's performance may become slow when multiple programs run at the same time, because main memory gets overloaded while loading them. Since the CPU cannot provide separate time for every program, reaction time increases. The primary cause of this issue is low-capacity RAM; raising the RAM capacity provides a solution.
CPU Heat-up
In a multitasking environment the processors are busier completing tasks, so the CPU generates more heat.

Multi-User Operating System

A multi-user operating system is an operating system that permits several users to access a single system running a single operating system. These systems are frequently quite complex, as they must manage the tasks required by the various users connected to them. Users usually sit at terminals or computers connected via a network to the system and to other system machines such as printers. A multi-user operating system differs from a connected single-user operating system in that each user accesses the same operating system from different machines. The main goal of developing a multi-user operating system was time-sharing and batch processing on mainframe systems. Multi-user operating systems are now often used in large organizations, the government sector, educational institutions such as large universities, and on the server side, for example Ubuntu Server or Windows Server. These servers allow several users to access the operating system, kernel, and hardware at the same time. A multi-user OS is usually responsible for handling memory and processing for other running programs, identifying and using system hardware, and efficiently handling user interaction and data requests. Reliability is especially important for a multi-user operating system, because several users rely on the system to function properly at the same time.

Components of Multi-User Operating System

Memory: The physical memory present inside the system, also known as Random Access Memory (RAM), is where working storage occurs. The system can modify data that is present in main memory, and every program to be executed must first be copied into main memory from physical storage such as a hard disk. Main memory is an important part of the OS because it determines how many programs may execute simultaneously.
Kernel: A multi-user operating system makes use of the kernel component, which is built in a low-level language. This component is loaded into the computer system's main memory and may interact directly with the system's hardware.
Processor: The CPU (Central Processing Unit) of the computer is sometimes known as the computer's brain. In large machines, the CPU requires multiple ICs; on smaller computers, the CPU is mapped onto a single chip known as a microprocessor.
User Interface: The user interface is the means of interaction between users and all software and hardware processes. It enables the users to interact with the computer system in a simple manner.
Device Handler: Each input and output device needs its own device handler. The device handler's primary goal is to serve all requests from the device request queue pool; it operates in a continuous cycle, dequeuing I/O request blocks from the queue and servicing them.
Spooler: Spooling stands for 'Simultaneous Peripheral Operations On-Line'. The spooler runs processes and collects their output at the same time; spooling is used by a variety of output devices, including printers.



Types of Multi-User Operating System
Distributed System: A distributed system, also known as distributed computing, is a collection of multiple components distributed over multiple computers that interact, coordinate, and appear as a single coherent system to the end-user. With the aid of the network, the end-user is able to interact with and operate these components; the entire system thus behaves as one network through which end-users communicate.

Time-Sliced Systems: In a time-sliced system, each user's job gets a specific amount of CPU time; in other words, each task is assigned a short time period, and these time slices are too small to be noticed by the user. This method of dividing CPU time is known as time slicing. An internal component known as the 'scheduler' decides which job runs next, based on a priority cycle. Time slicing is the basis of a scheduling algorithm also called Round Robin Scheduling, which gives all the processes running in the system an equal opportunity to use CPU time; a small sketch follows.
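A minimal round-robin sketch, with hypothetical burst times and a hypothetical 4-unit quantum: each unfinished job receives at most one quantum per pass, so every job gets regular access to the CPU.

    #include <stdio.h>

    int main(void)
    {
        int left[] = {5, 3, 7};                  /* hypothetical remaining times */
        int n = sizeof left / sizeof left[0];
        int quantum = 4, done = 0, t = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (left[i] == 0) continue;      /* job already finished */
                int slice = left[i] < quantum ? left[i] : quantum;
                printf("t=%2d  P%d runs for %d\n", t, i + 1, slice);
                t += slice;
                left[i] -= slice;
                if (left[i] == 0) done++;
            }
        }
        return 0;
    }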

Multiprocessor System: A multiprocessor system uses multiple processors at the same time, which helps to improve overall performance, since all the processors run side by side. It works at a pace faster than a single-processor operating system. If one of the processors fails, another processor takes over and completes its assigned tasks.

Features of the Multi-user Operating System

Multi-tasking - A multi-user operating system can run multiple programs simultaneously.
Resource sharing - A multi-user operating system can share multiple peripherals or resources, such as printers, hard drives, fax machines, plotters, etc. This feature helps to share files, documents, and data among users. It maps onto time-slicing, where a tiny slice of CPU time gets allocated to each user in turn.
Background processing - A multi-user operating system can process tasks in the background if they are not allowed to run in the foreground. It also allows simultaneous processing and interaction of programs with the system.

Example of Multi-user Operating System

● Mac OS X
● Windows 10
● Linux
● Unix
● Ubuntu

What are the Advantages of the Multi-user Operating System

Avoids Disruption: A multi-user operating system has multiple computers and devices operating and running on the same network; thus, damage to one computer in the network does not affect the others. Avoiding disruption in this way is the most significant advantage of a multi-user operating system.
Distribution of Resources: One user can share the file they are working on so that it is visible to other users. Thus, any user who requires it can access the file whenever they want. For example, if a user wants to view the PPT file of some other user, the user working on it can share it so that other users can access it.
Used in Airlines, Railways, and Buses: Ticket reservation systems use a multi-user operating system, wherein multiple users can log in, book a ticket, cancel a ticket, and check the availability or the status of a booked ticket simultaneously.
Backing up of Data: The multi-user operating system makes backing up of data easier, as it gets done on the machine used by the user.
Stability of Servers: The multi-user operating system provides remote access to servers from all countries in different time zones. Upgrading the hardware and software with the latest technologies keeps the server systematic and stable.



Disadvantages of the Multi-user Operating System

Virus: In a multi-user operating system, if a virus gets into a single computer on the network, it can spread to all the computers in the network.
Visibility of Data: Privacy of data and information becomes a concern, as information on the computers is shared among users.
Multiple Accounts: Multiple accounts on a single computer may not be suitable for all users; in such cases it is better to have a separate PC for each user.

Multiprocessing Operating System

A multiprocessor operating system uses more than one CPU within a single computer system to improve performance.
Multiple CPUs are interconnected so that a job can be divided among them for faster execution. When a job finishes, the results from all CPUs are collected and compiled to give the final output. Jobs may need to share main memory, and they may also share other system resources. Multiple CPUs can likewise be used to run multiple jobs simultaneously.

Fig. 1.3: Working of a multiprocessor system

For example, the UNIX operating system is one of the most widely used multiprocessing systems.
To employ a multiprocessing operating system effectively, the computer system must have the following:

● A motherboard capable of handling multiple processors.
● Processors that support operation in a multiprocessing configuration.

Symmetrical Multiprocessing Operating System

In a symmetrical multiprocessing operating system, each processor executes the same copy of the operating system. Each processor makes its own decisions and coordinates with all the other processors to make sure the system works efficiently. With the help of CPU scheduling algorithms, each task is assigned to the CPU with the least load. A symmetrical multiprocessing operating system is also known as a "shared everything" system, because all the processors share memory and the input-output bus.


Advantages

● Failure of one processor does not affect the functioning of other processors.
● It divides all the workload equally to the available processors.
● Make use of available resources efficiently.

Disadvantages

● Symmetric multiprocessing operating systems are more complex.


● They are more costly.
● Synchronization between multiple processors is difficult.

Asymmetrical Multiprocessing Operating System

In an asymmetrical multiprocessing operating system, one processor acts as a master, whereas all the remaining processors act as slaves. The master processor assigns ready-to-execute processes to the slave processors, maintaining a ready queue from which the slaves are supplied with processes; a scheduler created by the master process assigns processes to be executed by the slave processors. A small dispatch sketch follows.
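The sketch below models the master-slave split in miniature (a simulation, not real scheduler code): only the "master" touches the ready queue, handing the next ready PID to each idle slave in turn. The PIDs and the two-slave configuration are hypothetical.

    #include <stdio.h>

    #define NSLAVES 2                            /* hypothetical slave count */

    int main(void)
    {
        int ready_queue[] = {101, 102, 103, 104, 105};   /* hypothetical PIDs */
        int n = sizeof ready_queue / sizeof ready_queue[0];
        int head = 0;

        while (head < n) {
            /* Only the master runs this dispatch logic. */
            for (int s = 0; s < NSLAVES && head < n; s++, head++)
                printf("master: assign PID %d to slave %d\n",
                       ready_queue[head], s);
            printf("slaves execute their processes; master waits\n");
        }
        return 0;
    }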

Advantages

● Asymmetrical multiprocessing operating systems are cost-effective.


● They are easy to design and manage.
● They are more scalable.

Disadvantages

● There can be uneven distribution of workload among the processors.


● The processors do not share the same memory.

Comparison of Symmetric and Asymmetric Multiprocessing

Definition:
  Symmetric: Many processors work together to process programs using the same OS and memory.
  Asymmetric: Programs are processed by several processors in a master-slave arrangement.
Basic:
  Symmetric: Each CPU executes OS operations.
  Asymmetric: Only the master processor carries out OS functions.
Ease:
  Symmetric: Harder to design and understand, since all of the processors must be synchronized to maintain load balance.
  Asymmetric: Simpler, since only the master processor has access to the data structures.
Processes:
  Symmetric: All processors use a common ready queue, or each may have its own private ready queue.
  Asymmetric: The master processor assigns processes to the slave processors, or they have some predefined tasks.
Communication:
  Symmetric: Shared memory allows all processors to communicate with one another.
  Asymmetric: The slave processors need little communication, because the master processor controls them.
Architecture:
  Symmetric: All SMP processors have the same architecture.
  Asymmetric: Processors can have the same or different architectures.
Failure:
  Symmetric: When a CPU fails, the system's computing capacity decreases.
  Asymmetric: If the master processor fails, control is passed to a slave processor; if a slave processor fails, its task is passed to a different processor.
Cost:
  Symmetric: Costlier than asymmetric multiprocessing.
  Asymmetric: Cheaper than symmetric multiprocessing.


Lecture-3
3.1 Classification of Operating System (continued)

Real-Time Operating System

Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. With an RTOS, the processing time is measured in tenths of seconds. This system is time-bound and has fixed deadlines: the processing must occur within the specified constraints, otherwise the system fails.
Examples of the real-time operating systems:

● Airline traffic control systems,


● Command Control Systems,
● Airlines reservation system,
● Heart Pacemaker,
● Network Multimedia Systems,
● Robot etc.

Fig. 1.4: Real-time OS


The real-time operating systems can be of 3 types:
Hard Real-Time Operating System: These operating systems guarantee that critical tasks are completed within a strict range of time. For example, consider a robot hired to weld a car body: if the robot welds too early or too late, the car cannot be sold, so this is a hard real-time system in which the robot must complete the weld exactly on time. Other examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
Soft Real-Time Operating System: This operating system provides some relaxation in the time limit. Examples: multimedia systems, digital audio systems, etc. Explicit, programmer-defined and controlled processes are encountered in real-time systems. A separate process is charged with handling a single external event; the process is activated upon occurrence of the related event, signaled by an interrupt. Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a certain level of priority that corresponds to the relative importance of the event that it services, and the processor is allocated to the highest-priority ready process. This type of schedule, called priority-based preemptive scheduling, is used by real-time systems; a small dispatch sketch follows.
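A minimal sketch of priority-based dispatching: of all ready processes, the one serving the most important event (highest priority number here, by assumption) is given the processor first. Event names and priorities are hypothetical.

    #include <stdio.h>

    struct proc { const char *event; int priority; int ready; };

    int main(void)
    {
        struct proc procs[] = {                  /* hypothetical event handlers */
            {"sensor sample", 2, 1},
            {"motor control", 5, 1},
            {"status log",    1, 1},
        };
        int n = sizeof procs / sizeof procs[0];

        for (int served = 0; served < n; served++) {
            int best = -1;                       /* pick highest-priority ready */
            for (int i = 0; i < n; i++)
                if (procs[i].ready &&
                    (best < 0 || procs[i].priority > procs[best].priority))
                    best = i;
            printf("dispatch: %s (priority %d)\n",
                   procs[best].event, procs[best].priority);
            procs[best].ready = 0;               /* event handled */
        }
        return 0;
    }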

Firm Real-Time Operating System: An RTOS of this type also has to follow deadlines; although the impact of missing one is small, it can have unintended consequences, including a reduction in the quality of the product. Example: multimedia applications.

Advantages:
Maximum Consumption - Maximum utilization of devices and the system, and thus more output from all the resources.
Task Shifting - The time assigned to shifting between tasks in these systems is very small.
Focus on Application - The focus is on running applications, with less importance given to applications waiting in the queue.
RTOS in Embedded Systems - Since the size of programs is small, an RTOS can also be used in embedded systems, such as in transport and others.
Error Free - These types of systems are designed to be error-free.
Memory Allocation - Memory allocation is best managed in these types of systems.

Disadvantages:

Limited Tasks - Very few tasks run simultaneously, and concentration is kept on a few applications to avoid errors.
Heavy Use of System Resources - The required system resources can be expensive, and are sometimes not very good.
Complex Algorithms - The algorithms are very complex and difficult for the designer to write.
Device Drivers and Interrupt Signals - An RTOS needs specific device drivers and interrupt signals so it can respond to interrupts as early as possible.
Thread Priority - It is not good to set thread priorities freely, as these systems are very little prone to switching tasks.
Minimum Switching - An RTOS performs minimal task switching.

Interactive Operating System

An interactive operating system accepts human inputs: users give commands or data to the computer by typing or by gestures, and the system executes them. In other words, an interactive operating system allows the user to interact directly with the computer and permits the execution of interactive programs; all PC operating systems are interactive operating systems. Mac OS and Windows are examples of interactive operating systems, and programs that let users enter data or commands, such as word processors and spreadsheet applications, are examples of interactive programs. By contrast, a non-interactive program is one that, once started, continues without the need for human interaction; a compiler is an example of a non-interactive program.

Properties of Interactive Operating System:

Batch Processing: Batch processing is the process of gathering programs and data together in a batch before processing begins. The operating system defines each job as a single unit using an already defined sequence of commands, data, etc. Before jobs are carried out, they are stored in the memory of the system, and they are processed on a FIFO basis. When a job is finished, the operating system releases its memory and copies its output into an output spool for later printing. Batch processing improves system performance because a new job begins only when the old one is completed, without any interference from the user. One disadvantage is that there is a small chance that a job will enter an infinite loop; debugging is also somewhat difficult with batch processing.
Multitasking: The CPU can execute many tasks simultaneously by switching between them. This is known as a time-sharing system, and it has a very fast response time. The switches happen so quickly that users can easily interact with each running program.
Multiprogramming: Multiprogramming happens when the memory of the system holds multiple processes. The job of the operating system here is to run these processes, sharing the single processor among them and thus increasing CPU utilization. The CPU performs only one job at a particular time while the rest wait for the processor to be assigned to them. The operating system ensures that the CPU is never idle by using its memory management programs to monitor the state of all system resources and active programs. One advantage of this is that it gives the user the feeling that the CPU is working on multiple programs simultaneously.
Real-Time System: Dedicated embedded systems are real-time systems. The main job of the operating system here is to read and react to sensor data and then provide a response within a fixed time period, thereby ensuring good performance.
Distributive Environment: A distributive environment consists of many independent processors. The job of the operating system here is to distribute the computation logic among the physical processors and, at the same time, manage communication between them. Each processor has its own local memory, so they do not share memory.
Interactivity: Interactivity is defined as the power of a user to interact with the system. The main job of the operating system here is to provide an interface for interacting with the system, manage I/O devices, and ensure a fast response time.
Spooling: Spooling is defined as the process of pushing the data from different I/O jobs into a buffer or an area in memory so that any device can access the data when it is ready. Because devices have different data access rates, the operating system handles the spooling of I/O device data and maintains the spooling buffer.
Advantages of Interactive Operating System:

Usability: An operating system is designed to perform something, and interactivity allows the user to manage tasks more or less in real time.
Security: Security policies are simple to enforce. In non-interactive systems, the user virtually always knows what their programs will do during their lifetime, which makes it possible to forecast and correct bugs.

Disadvantages of Interactive Operating System:

Tough to design: Depending on the target device, interactivity can be challenging to design, because the user must be prepared for every possible input. With many inputs, the state of a program can change at any particular time; all of these states must be handled in some way, and it doesn't always work out properly.
Example of an Interactive Operating System:
● Unix Operating System
● Disk Operating System
What is a Multithreading Operating System?
A multithreaded operating system is an operating system that supports multiple threads of execution within a single process.
Threads are lightweight processes that share the same memory space, allowing for more efficient concurrent execution compared
to traditional heavyweight processes. In a multithreaded operating system, each thread within a process can execute
independently, performing different tasks simultaneously. This allows for better utilization of system resources such as CPU time
and memory, as well as improved responsiveness and throughput for applications.
Multithreading can provide several advantages, including:
Concurrency: Multiple threads can execute concurrently within a single process, allowing for better responsiveness and
improved performance, especially on multi-core processors.
Resource Sharing: Threads within the same process share resources such as memory and file descriptors, reducing overhead
compared to separate processes.
Simplified Programming: Multithreading can simplify programming by allowing developers to write concurrent code more
easily than with processes, as threads within the same process can communicate more directly and efficiently.
Efficient Communication: Threads within the same process can communicate through shared memory, message passing, or
other inter-thread communication mechanisms, allowing for efficient data exchange.
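To make these ideas concrete, below is a minimal sketch of multithreading using POSIX threads (pthreads); the function serve_client and the client IDs are illustrative assumptions, not part of any particular OS API described above.

/* Minimal pthreads sketch; on a POSIX system, compile with: gcc demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function independently inside the same process,
   so both threads share the process's global memory. */
void *serve_client(void *arg) {
    int id = *(int *)arg;
    printf("Thread serving client %d\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int c1 = 1, c2 = 2;

    /* Create two threads that execute concurrently. */
    pthread_create(&t1, NULL, serve_client, &c1);
    pthread_create(&t2, NULL, serve_client, &c2);

    /* Wait for both threads to finish before the process exits. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Both threads execute concurrently within one process and share its address space, which is exactly the concurrency and resource sharing described above.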
Multithreading Model:
Multithreading allows the application to divide its task into individual threads. With multithreading, the same process or task can be done by a number of threads; in other words, more than one thread performs the task. With the use of multithreading, multitasking can be achieved. The main drawback of single-threaded systems is that only one task can be performed at a time, and multithreading, which allows multiple tasks to be performed, overcomes this drawback.



For example:

Fig: 1.5 (Example of multithreading: several tasks running at the same time)


In the above example, client1, client2, and client3 are accessing the web server without any waiting. In
multithreading, several tasks can run at the same time.

In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads are handled above the kernel and are managed without any kernel support, whereas kernel-level threads are managed directly by the operating system. Nevertheless, there must be a form of relationship between user-level and kernel-level threads.

There exist three established multithreading models classifying these relationships:

○ Many to one multithreading model


○ One to one multithreading model
○ Many to Many multithreading models

Many to one multithreading model:

The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented even on a simple kernel with no thread support. The disadvantage of this model is that, since only one kernel-level thread is scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multithreaded processors or multiprocessor systems. In this model, all thread management is done in user space, and if one thread makes a blocking call, the whole process is blocked.
In the below figure, the many to one model associates all user-level threads to single kernel-level threads.



One to one multithreading model

The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship
facilitates the running of multiple threads in parallel. However, this benefit comes with its drawback. The generation
of every new user thread must include creating a corresponding kernel thread causing an overhead, which can
hinder the performance of the parent process. Windows series and Linux operating systems try to tackle this problem
by limiting the growth of the thread count.

In the above figure, the one-to-one model associates each user-level thread with a single kernel-level thread.
Many to Many Model multithreading model

In this type of model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends upon the particular application. The developer can create many threads at both levels, but the numbers may not be the same. The many-to-many model is a compromise between the other two models. In this model, if any thread makes a blocking system call, the kernel can schedule another thread for execution. Also, the complexity introduced in the previous models is not present here. Though this model allows the creation of multiple kernel threads, true concurrency cannot be achieved, because the kernel can schedule only one process at a time.
As shown in the figure below, the many-to-many model associates several user-level threads with the same or a smaller number of kernel-level threads.



Head-to-head comparison between the User level threads and Kernel level threads

Features | User Level Threads | Kernel Level Threads
Implemented by | Implemented by the users (a thread library). | Implemented by the OS.
Context switch time | Less. | More.
Multithreading | Multithreaded applications cannot employ multiprocessing with user-level threads. | The kernel itself may be multithreaded.
Implementation | Easy to implement. | Complicated to implement.
Blocking operation | If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel-level thread is blocked, it does not block the other threads in the same process.
Recognition | The OS doesn't recognize them. | Recognized by the OS.
Thread management | The thread library includes the source code for thread creation, data transfer, thread destruction, message passing, and thread scheduling. | Application code on kernel-level threads contains no thread-management code; it is simply an API to the kernel mode.
Hardware support | No hardware support needed. | Requires hardware support.
Creation and management | May be created and managed much faster. | Takes much more time to create and handle.
Examples | Java threads and POSIX threads. | Windows and Solaris threads.
Operating system | Any OS may support them. | Only specific OSes support them.



Lecture-4

What are Operating System Services?


The operating system provides the programming environment in which a programmer works with the computer system. A user program requests various resources through the operating system.
The operating system gives several services to utility programmers and users. Applications access these services
through application programming interfaces or system calls.
By invoking those interfaces, the application can request a service from the operating system, pass parameters, and
acquire the operation outcomes.
User interface. Almost all operating systems have a user interface (UI). This interface can take several forms. One is a
command-line interface (CLI), which uses text commands and a method for entering them (say, a keyboard for typing in
commands in a specific format with specific options). Another is a batch interface, in which commands and directives to
control those commands are entered into files, and those files are executed. Most commonly, a graphical user interface
(GUI) is used. Here, the interface is a window system with a pointing device to direct I/O, choose from menus, and make
selections and a keyboard to enter text. Some systems provide two or all three of these variations.
Program execution. The system must be able to load a program into memory and to run that program. The program must
be able to end its execution, either normally or abnormally (indicating error).
I/O operations. A running program may require I/O, which may involve a file or an I/O device. For specific devices, special functions may be desired (such as recording to a CD or DVD drive or blanking a display screen). For efficiency and protection, users usually cannot control I/O devices directly. Therefore, the operating system must provide a means to do I/O.
File-system manipulation. The file system is of particular interest. Obviously, programs need to read and write files and
directories. They also need to create and delete them by name, search for a given file, and list file information. Finally, some
operating systems include permissions management to allow or deny access to files or directories based on file ownership.
Many operating systems provide a variety of file systems, sometimes to allow personal choice and sometimes to provide
specific features or performance characteristics.
Communications. There are many circumstances in which one process needs to exchange information with another process.
Such communication may occur between processes that are executing on the same computer or between processes that are
executing on different computer systems tied together by a computer network. Communications may be implemented via
shared memory, in which two or more processes read and write to a shared section of memory, or message passing, in which
packets of information in predefined formats are moved between processes by the operating system.
Error detection. The operating system needs to detect and correct errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow, an attempt to access an illegal memory location, or a too-great use of CPU time). For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing. Sometimes, it has no choice but to halt the system. At other times, it might terminate an error-causing process or return an error code to a process for the process to detect and possibly correct.
Resource allocation. When there are multiple users or multiple jobs running at the same time, resources must be allocated
to each of them. The operating system manages many different types of resources. Some (such as CPU cycles, main
memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more
general request and release code. For instance, in determining how best to use the CPU, operating systems have CPU-
scheduling routines that take into account the speed of the CPU, the jobs that must be executed, the number of registers
available, and other factors. There may also be routines to allocate printers, USB storage drives, and other peripheral
devices.
Protection and security. The owners of information stored in a multiuser or networked computer system may want to control use of that information. When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important. Such security starts with requiring each user to authenticate him or herself to the system, usually by means of a password, to gain access to system resources. It extends to defending external I/O devices, including network adapters, from invalid access attempts and to recording all such connections for detection of break-ins. If a system is to be protected and secure, precautions must be instituted throughout it.
Accounting. We want to keep track of which users use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for researchers who wish to reconfigure the system to improve computing services.

A view of operating system services.

Fig:1.6(view of operating system services)



Lecture-5

5.1 Operating System Structure


A system as large and complex as a modern operating system must be engineered carefully if it is to function
properly and be modified easily. A common approach is to partition the task into small components, or modules,
rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with
carefully defined inputs, outputs, and functions. In this section, we discuss how these components are interconnected and melded into a kernel.
Simple Structure:
Many operating systems do not have well-defined structures. Frequently, such systems started as small, simple, and
limited systems and then grew beyond their original scope. MS-DOS is an example of such a system. It was
originally designed and implemented by a few people who had no idea that it would become so popular. It was
written to provide the most functionality in the least space, so it was not carefully divided into modules.
In MS-DOS, the interfaces and levels of functionality are not well separated. For instance, application programs are
able to access the basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS
vulnerable to errant (or malicious) programs, causing entire system crashes when user programs fail. Of course,
MS-DOS was also limited by the hardware of its era.
Because the Intel 8088 for which it was written provides no dual mode and no hardware protection, the designers of MS-DOS had no choice but to leave the base hardware accessible.
MS-DOS layer structure
Advantages of Simple structure:

● It delivers better application performance because of the few interfaces between the application program and the
hardware.
● Easy for kernel developers to develop such an operating system.
● It can perform fundamental operations.
● It uses straightforward commands.

Disadvantages of Simple structure:

● The structure is very complicated, as no clear boundaries exist between modules.

● It does not enforce data hiding in the operating system.
● Limited ability.
● Lack of flexibility.

Layered Approach
In a layered approach, the OS consists of several layers where each layer has a well-defined functionality and each layer
is designed, coded and tested independently.
The layered structure approach breaks up the operating system into different layers and retains much more control over the system. The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. These layers are designed so that each layer uses the functions of the lower-level layers only. This simplifies the debugging process: if an error occurs once the lower-level layers have been debugged, the error must be in the layer under test, because the lower-level layers have already been verified.
This allows implementers to change the inner workings and increases modularity.
As long as the external interface of the routines doesn't change, developers have more freedom to change the inner
workings of the routines.
The main advantage is the simplicity of construction and debugging. The main difficulty is defining the various layers.
The main disadvantage of this structure is that the data needs to be modified and passed on at each layer, which adds
overhead to the system. Moreover, careful planning of the layers is necessary as a layer can use only lower-level
layers. UNIX is an example of this structure.
Layering provides a distinct advantage in an operating system. All the layers can be defined separately and interact with
each other as required. Also, it is easier to create, maintain and update the system if it is done in the form of layers.
Change in one layer specification does not affect the rest of the layers.



Each of the layers in the operating system can only interact with the above and below layers. The lowest layer handles
the hardware, and the uppermost layer deals with the user applications.

Fig:1.7(layered architecture)

Architecture of Layered Structure

This type of operating system was created as an improvement over the early monolithic systems. The operating system is
split into various layers in the layered operating system, and each of the layers has different functionalities. There are some
rules in the implementation of the layers as follows.
A particular layer can access all the layers present below it, but it cannot access the layers above it. That is, layer n-1 can access all the layers from n-2 to 0, but it cannot access layer n.
Layer 0 deals with allocating the processes and switching between processes when interruptions occur or the timer expires; it also deals with the basic multiprogramming of the CPU. Thus, if the user layer wants to interact with the hardware layer, the request travels through all the layers from n-1 to 1. Each layer must be designed and implemented such that it needs only the services provided by the layers below it.
There are six layers in the layered operating system. A diagram demonstrating these layers is as follows:

Fig:1.8(view of layered architecture)


Hardware: This layer interacts with the system hardware and coordinates with all the peripheral devices used, such as a
printer, mouse, keyboard, scanner, etc. These types of hardware devices are managed in the hardware layer.
The hardware layer is the lowest and most authoritative layer in the layered operating system architecture. It is attached
directly to the core of the system.



CPU Scheduling: This layer deals with scheduling the processes for the CPU. Many scheduling queues are used to handle
processes. When the processes enter the system, they are put into the job queue.
The processes that are ready to execute in the main memory are kept in the ready queue. This layer is responsible for
managing how many processes will be allocated to the CPU and how many will stay out of the CPU.

Memory Management: Memory management deals with memory and moving processes from disk to primary memory for
execution and back again. This is handled by the third layer of the operating system. All memory management is associated
with this layer. There are various types of memories in the computer like RAM, ROM.
If you consider RAM, this layer is concerned with swapping memory in and out. When the computer runs, some processes move into the main memory (RAM) for execution, and when a program, such as a calculator, exits, it is removed from the main memory.
Process Management: This layer is responsible for managing the processes, i.e., assigning the processor to a process and
deciding how many processes will stay in the waiting schedule. The priority of the processes is also managed in this layer.
The different algorithms used for process scheduling are FCFS (first come, first served), SJF (shortest job first), priority
scheduling, round-robin scheduling, etc.

Advantages of Layered Structure

There are several advantages of the layered structure of operating system design, such as:
Modularity: This design promotes modularity as each layer performs only the tasks it is scheduled to perform.

Easy debugging: As the layers are discrete so it is very easy to debug. Suppose an error occurs in the CPU scheduling layer.
The developer can only search that particular layer to debug, unlike the Monolithic system where all the services are present.

Easy update: A modification made in a particular layer will not affect the other layers.

No direct access to hardware: The hardware layer is the innermost layer present in the design. So a user can use the
services of hardware but cannot directly modify or access it, unlike the Simple system in which the user had direct access to
the hardware.

Abstraction: Every layer is concerned with its functions. So the functions and implementations of the other layers are
abstract to it.

Disadvantages of Layered Structure

Though this system has several advantages over the Monolithic and Simple design, there are also some disadvantages, such
as:
Complex and careful implementation: As a layer can access the services of the layers below it, so the arrangement of the
layers must be done carefully. For example, the backing storage layer uses the services of the memory management layer. So
it must be kept below the memory management layer. Thus with great modularity comes complex implementation.

Slower in execution: If a layer wants to interact with another layer, the request travels through all the layers present between the two interacting layers. Thus it increases response time, unlike the monolithic system, which is faster. An increase in the number of layers may therefore lead to a very inefficient design.

Functionality: It is not always possible to divide the functionalities. Many times, they are interrelated and can't be separated.

Communication: No communication between non-adjacent layers.

Lecture-6

6.1 What is Kernel ?


The kernel is the central component of an operating system that manages the operations of the computer and its hardware; it basically manages operations of memory and CPU time. It is the core component of an operating system.
Kernel acts as a bridge between applications and data processing performed at hardware level using inter-process
communication and system calls.
Kernel loads first into memory when an operating system is loaded and remains into memory until the operating system is
shut down again. It is responsible for various tasks such as disk management, task management, and memory management.
Kernel has a process table that keeps track of all active processes. Process table contains a per process region table whose
entry points to entries in the region table.
Kernel loads an executable file into memory during the ‘exec’ system call. It decides which process should be allocated to the processor to execute and which process should be kept in main memory awaiting execution.
It basically acts as an interface between user applications and hardware. The major aim of the kernel is to manage
communication between software i.e. user-level applications and hardware i.e., CPU and disk memory.

Objectives of Kernel:

● To establish communication between user level application and hardware.


● To decide the state of incoming processes.
● To control disk management.
● To control memory management.
● To control task management.
Features of Kernel
● Inter-process communication
● Context switching
● Low-level scheduling of processes
● Process synchronization

The kernel handles the following:


● Resource management
● Device management
● Memory management
● CPU/GPU
● Input/output device
● System calls
● Memory
Microkernel
The microkernel is one of the kernel's classifications. Being a kernel, it handles all system resources. On the other hand,
the user and kernel services in a microkernel are implemented in distinct address spaces. User services are kept in user
address space, while kernel services are kept in kernel address space. It aids in reducing the kernel and OS's size.
It provides a minimal amount of process and memory management services. The interaction between the client application and services running in user address space is established via message passing, which reduces the speed of microkernel execution. Because kernel and user services are isolated, the OS is unaffected if any user service fails: the kernel service remains unaffected. The microkernel is extendable, since new services are added to the user address space and hence require no changes in kernel space. It is also lightweight, secure, and reliable.
Microkernels and their user environments are typically implemented in C or C++ with a little assembly, although other implementation languages are possible for some high-level code.
Example: Mach OS, Eclipse IDE
Architecture of Microkernel
A microkernel is the minimum software required to implement an operating system correctly. It includes basic needs such as memory, process scheduling mechanisms, and fundamental inter-process communication. The microkernel is the only program that executes at the privileged level, i.e., in kernel mode; the other functions of the OS are moved out of kernel mode and executed in user mode.



The microkernel ensures that the code can be easily managed, because the services are split into user space and only a small amount of code runs in kernel mode, resulting in improved security and stability.
The microkernel is entirely responsible for the operating system's most significant services, which are as follows:
o Inter-Process Communication
o Memory Management
o CPU Scheduling

Inter-Process Communication
Interposes communication refers to how processes interact with one another. A process has several threads. In the kernel
space, threads of any process interact with one another. Messages are sent and received across threads using ports. At the
kernel level, there are several ports like process port, exceptional port, bootstrap port, and registered port. All of these
ports interact with user-space processes.
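The port mechanism described above is one concrete form of message passing. As a generic, hedged illustration only (an ordinary POSIX pipe, not the Mach port API), the sketch below shows two processes exchanging a message through the kernel:

/* A minimal sketch of message passing between two processes via a POSIX
   pipe. This is only a stand-in for the port-based IPC described above. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* child acts as the "client" */
        close(fd[0]);
        const char *msg = "request: read file";
        write(fd[1], msg, strlen(msg) + 1);   /* send the message */
        close(fd[1]);
        _exit(0);
    }

    char buf[64];                    /* parent acts as the "server" */
    close(fd[1]);
    read(fd[0], buf, sizeof buf);    /* receive the message */
    printf("server received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}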
Memory Management
Memory management is the process of allocating space in main memory for processes. However, there is also the
creation of virtual memory for processes. Virtual memory means that if a process has a bigger size than the main
memory, it is partitioned into portions and stored. After that, one by one, every part of the process is stored in the main
memory until the CPU executes it.
CPU Scheduling
CPU scheduling refers to which process the CPU will execute next. All processes are queued and executed one at time.
Every process has a level of priority, and the process with the highest priority is performed out first. CPU scheduling
aids in optimizing CPU utilization. In addition, resources are being used more efficiently. It also minimizes the waiting
time. Waiting time shows that a process takes less time in the queue and that resources are allocated to the process more
quickly. CPU scheduling also reduces response and turnaround times.
Components of Microkernel

Fig:1.8(microkernel operating system)

A microkernel contains only the system's basic functions. A component is included in the microkernel only if putting it outside would disrupt the system's operation; the user mode should be used for all other non-essential components. The minimum functionalities needed in the microkernel are as follows:
● In the microkernel, processor scheduling algorithms are also required. Process and thread schedulers are included.
● Address spaces and other memory management mechanisms should be incorporated in the microkernel.
Memory protection features are also included.
● Inter-process communication (IPC) is used to manage servers that execute in their own address spaces.

Advantages



● Microkernels are secure, since the kernel includes only those parts whose absence would disturb the system's functionality.
● Microkernels are modular, and the various modules may be swapped, reloaded, and modified without
affecting the kernel.
● Microkernel architecture is compact and isolated, so it may perform better.
● The system expansion is more accessible, so it may be introduced to the system application without disrupting the
kernel.
● When compared to monolithic systems, microkernels have fewer system crashes. Furthermore, due to the
modular structure of the microkernel, any crashes that do occur are simply handled.
● The microkernel interface helps in enforcing a more modular system structure.
● Server failure is treated the same as any other user program failure.
● It adds new features without recompiling.
● Size is smaller
● Easy to extend
● Easy to port
● Less prone to errors and bugs

Disadvantages

● When the drivers are implemented as procedures, a context switch or a function call is needed.
● In a microkernel system, providing services are more costly than in a traditional monolithic system.
● The performance of a microkernel system might be indifferent and cause issues.
● Execution is slower

What is Monolithic kernel


The monolithic operating system is a very basic operating system in which file management, memory management,
device management, and process management are directly controlled within the kernel. The kernel can access all the
resources present in the system. In monolithic systems, each component of the operating system is contained within
the kernel. Operating systems that use monolithic architecture were first used in the 1970s.
The monolithic operating system is also known as the monolithic kernel. This is an old operating system used to
perform small tasks like batch processing and time-sharing tasks in banks. The monolithic kernel acts as a virtual
machine that controls all hardware parts.
It is different from a microkernel, which has limited tasks. A microkernel-based system is divided into two parts, kernel space and user space, and the two parts communicate with each other through IPC (inter-process communication). The microkernel's advantage is that if one server fails, then another server takes control of it.
A monolithic kernel is an operating system architecture where the entire operating system works in kernel space. The monolithic model differs from other operating system architectures, such as the microkernel architecture, in that it alone defines a high-level virtual interface over the computer hardware. A set of primitives or system calls implements all operating system services such as process management, concurrency, and memory management. Device drivers can be added to the kernel as modules.
Monolithic Kernel Components



Fig:1.9(Monolithic kernel system)

A monolithic design of the operating system architecture makes no special accommodation for the special nature of
the operating system. Although the design follows the separation of concerns, no attempt is made to restrict the
privileges granted to the individual parts of the operating system. The entire operating system executes with
maximum privileges. The communication overhead inside the monolithic operating system is the same as that of any
other software, considered relatively low.
CP/M and DOS are simple examples of monolithic operating systems. Both CP/M and DOS are operating systems
that share a single address space with the applications. In CP/M, the 16-bit address space starts with system
variables and the application area. It ends with three parts of the operating system, namely CCP (Console
Command Processor), BDOS (Basic Disk Operating System), and BIOS (Basic Input/output System).
In DOS, the 20-bit address space starts with the array of interrupt vectors and the system variables,
followed by the resident part of DOS and the application area and ending with memory block used by the
video card and BIOS.

Advantages of Monolithic Architecture

Monolithic architecture has the following advantages:

• Simple and easy-to-implement structure.
• Faster execution due to direct access to all the services.
• The execution of the monolithic kernel is quite fast, as services such as memory management, file management, and process scheduling are implemented under the same address space.
• A process runs completely in a single address space in the monolithic kernel.
• The monolithic kernel is a static single binary file.
Disadvantages of Monolithic Architecture
• If any service fails in the monolithic kernel, it leads to the failure of the entire system.
• The entire operating system needs to be modified by the user to add any new service.
• The addition of new features or removal of obsolete features is very difficult.
• Security issues are always there, because there is no isolation among the various servers present in the kernel.

Features of Monolithic System

Simple structure: This type of operating system has a simple structure. All the components needed for processing are embedded
into the kernel.
Works for smaller tasks: It works better for performing smaller tasks as it can handle limited resources.
Communication between components: All the components can directly communicate with each other and
also with the kernel.
Fast operating system: The code to make a monolithic kernel is very fast and robust

Difference between Monolithic Kernel and Microkernel

A kernel is the core part of an operating system, and it manages the system resources. A kernel is like a bridge between
the application and hardware of the computer. The kernel can be classified further into two categories, Microkernel and
Monolithic Kernel.
The microkernel is a type of kernel that allows customization of the operating system. It runs on privileged mode and
provides low-level address space management and Inter-Process Communication (IPC). Moreover, OS services such as
file system, virtual memory manager, and CPU scheduler are on top of the microkernel. Each service has its own
address space to make them secure. Besides, the applications also have their own address spaces. Therefore, there is
protection among applications, OS Services, and kernels.

• A monolithic kernel is another classification of the kernel. In monolithic kernel-based systems, each application has its own address space. Like the microkernel, it manages system resources between application and hardware, but user services and kernel services are implemented under the same address space. This increases the size of the kernel and thus the size of the operating system as well.
• This kernel provides CPU scheduling, memory management, file management, and other system functions through system calls. As both kinds of services are implemented under the same address space, operating system execution is faster.

Fig 1.10(Microkernel and monolithic view)

Terms | Monolithic Kernel | Microkernel
Definition | A monolithic kernel is a type of kernel where the entire operating system works in kernel space. | A microkernel is a kernel type that provides low-level address space management, thread management, and inter-process communication to implement an operating system.
Address space | Both user services and kernel services are kept in the same address space. | User services and kernel services are kept in separate address spaces.
Size | Larger than the microkernel. | Smaller in size.
Execution | Fast execution. | Slower execution.
OS services | The kernel contains the OS services. | The OS services and kernel are separated.
Extendible | Quite complicated to extend. | Easily extensible.
Security | If a service crashes, the whole system crashes. | If a service crashes, the working of the microkernel is not affected.
Customization | Difficult to add new functionality, therefore not customizable. | Easier to add new functionality, therefore more customizable.
Code | Less coding is required to write a monolithic kernel. | A microkernel requires more coding.
Example | Linux, FreeBSD, OpenBSD, NetBSD, Microsoft Windows (95, 98, Me), Solaris, HP-UX, DOS, OpenVMS, XTS-400, etc. | QNX, Symbian, L4Linux, Singularity, K42, Mac OS X, Integrity, PikeOS, HURD, Minix, and Coyotos.



Reentrant Kernel
A reentrant kernel enables processes (or, to be more precise, their corresponding kernel threads) to give away the CPU while in kernel mode; they do not hinder other processes from also entering kernel mode. In single-processor systems, multiple such processes may be scheduled together. An example is a disk read: a user program issues a system call for a disk read, and the scheduler assigns the CPU to another process (kernel thread) until an interrupt from the disk controller indicates that the data is available and the original thread can be resumed. That process can still access I/O (which needs kernel functions), like user input. The system stays responsive, and CPU time wasted on I/O wait is reduced. In a non-reentrant kernel, the original function (whichever requested the data) would block the kernel until the disk read was complete.
A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e., it can be safely executed concurrently). To be reentrant, a computer program or routine must satisfy the following conditions (a short sketch follows this list):
● Must hold no static (or global) non-constant data.
● Must not return the address to static (or global) non-constant data.
● Must work only on the data provided to it by the caller.
● Must not rely on locks to singleton resources (resources of which there is only a single instance).
● Must not modify its own code (unless executing in its own unique thread storage)
● Must not call non-reentrant computer programs or routines.
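The following C sketch contrasts a non-reentrant routine with a reentrant one; the function names to_upper_bad and to_upper_ok are illustrative assumptions.

#include <stddef.h>

/* Non-reentrant: holds static data and returns its address, so a second
   (or concurrent) call overwrites the result of the first. */
char *to_upper_bad(const char *s) {
    static char buf[64];                      /* static data: breaks rule 1 */
    size_t i;
    for (i = 0; s[i] != '\0' && i < sizeof buf - 1; i++)
        buf[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
    buf[i] = '\0';
    return buf;                               /* returns static data: rule 2 */
}

/* Reentrant: works only on the data provided to it by the caller. */
void to_upper_ok(const char *s, char *out, size_t n) {
    size_t i;
    for (i = 0; s[i] != '\0' && i < n - 1; i++)
        out[i] = (s[i] >= 'a' && s[i] <= 'z') ? s[i] - 32 : s[i];
    out[i] = '\0';
}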

Fig:1.11(user mode and kernel mode )


Reentrant kernels are still able to execute non-reentrant functions by using locks to ensure that only one process can execute a given non-reentrant function at a time.
Hardware interrupts are able to suspend the current process even if it is running in kernel mode (this enables things like Ctrl+C to stop execution).
Kernel Control Path: A kernel control path is the sequence of instructions executed by the kernel to handle a system call. Normal execution runs instructions sequentially, but certain actions cause the CPU to interleave control paths:
A process in user mode invokes a system call: the scheduler selects a new process to run and causes a process switch, so two control paths are executed on behalf of two different processes.
The CPU detects an exception: for example, a process accesses a page not present in RAM. The kernel suspends the process that caused the exception and starts execution of a suitable procedure, such as page allocation; once the page is allocated, the original control path continues.
Hardware interrupt: hardware interrupts have higher priority, so their handlers preempt the current control path.



Dual Mode of Operation (used to implement Protection)

The dual-mode operations in the operating system protect the operating system from illegal users. We accomplish this defense by designating some of the system instructions as privileged instructions that can cause harm. The hardware allows the execution of privileged instructions only in kernel mode. An example of a privileged instruction is the command to switch to user mode; other examples include monitoring of I/O, controlling timers, and handling interrupts. To ensure proper operating system execution, we must differentiate between the execution of operating system code and user-defined code, so most computer systems provide hardware support that helps distinguish between different execution modes. There are two modes of operation: user mode and kernel mode. A mode bit is required to identify in which mode the current instruction is executing: if the mode bit is 1, the system operates in user mode, and if the mode bit is 0, it operates in kernel mode. NOTE: At booting time, the system always starts in kernel mode.

Types of Dual Mode in Operating System

The operating system has two modes of operation to ensure it works correctly:

1. User Mode
2. Kernel Mode

1. User Mode (Non-Privileged Mode):

When the computer system runs user applications like file creation or any other application program, it is in user mode. This mode does not have direct access to the computer's hardware. For hardware-related tasks, such as when a user application requests a service from the operating system or an interrupt occurs, the system must switch to kernel mode. The mode bit of user mode is 1: if the mode bit of the system's processor is 1, the system is in user mode.

2. Kernel Mode (Privileged Mode):

All the bottom-level tasks of the operating system are performed in kernel mode. As the kernel space has direct access to the hardware of the system, kernel mode handles all the processes that require hardware support. Apart from this, the main functionality of kernel mode is to execute privileged instructions. These privileged instructions are not provided with user access, which is why they cannot be processed in user mode. So, all the processes and instructions that the user is restricted from interfering with are executed in the kernel mode of the operating system. The mode bit for kernel mode is 0; for the system to function in kernel mode, the mode bit of the processor must be 0.



Fig: 1.12 (Kernel mode view)
What is the Need for Dual Mode Operations?

Certain types of processes must be hidden from the user, while certain tasks do not require any hardware support. Using the dual mode of the OS, these two kinds of tasks can be dealt with separately. The operating system also needs to function in dual mode because kernel-level programs perform all the bottom-level functions of the OS, like process management and memory management; if the user were allowed to alter these, the entire system could fail. So, to restrict users to only the tasks meant for them, dual mode is necessary for an operating system.
Whenever the system works on user applications, it is in user mode. Whenever the user requests some hardware service, a transition from user mode to kernel mode occurs, which is done by changing the mode bit from 1 to 0; to return to user mode, the mode bit is changed back to 1.
User Mode and Kernel Mode Switching: In its lifespan, a process executes in user mode and kernel mode. User mode is the normal mode, where the process has limited access. Kernel mode is the privileged mode, where the process has unrestricted access to system resources like hardware, memory, etc. A process can access services like hardware I/O by accessing kernel data while executing in kernel mode. Anything related to process management, I/O hardware management, and memory management requires a process to execute in kernel mode.
It is important to know that a process in kernel mode gets the power to access any device and memory, so a crash in kernel mode brings down the whole system, whereas a crash in user mode brings down only the faulty process. The kernel provides the System Call Interface (SCI), whose entry points let user processes enter kernel mode; system calls are the only way a process can go from user mode into kernel mode. The diagram below explains user-mode-to-kernel-mode switching in detail.

Example of Dual Mode Operation

With the mode bit, we can distinguish between a task executed on behalf of the operating system and one executed on behalf of the user. When the computer system executes on behalf of a user application, the system is in user mode. However, when a user application requests a service from the operating system via a system call, it must transition from user to kernel mode to fulfill the request. This architectural enhancement is useful for many other aspects of system operation as well. At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode, changing the state of the mode bit to 0. Thus, whenever the operating system gains control of the computer, it is in kernel mode. The system always switches to user mode by setting the mode bit to 1 before passing control to a user program.
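As a hedged illustration (assuming a Linux/POSIX system), the short program below crosses from user mode into kernel mode twice: once through the ordinary write() wrapper and once through the Linux-specific syscall() interface. In both cases the CPU traps into kernel mode, the kernel performs the I/O, and control returns to user mode.

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* write() is a thin user-mode wrapper around the write system call. */
    write(STDOUT_FILENO, "hello from user mode\n", 21);

    /* The same request made explicitly through syscall(2) (Linux-specific). */
    syscall(SYS_write, STDOUT_FILENO, "hello again\n", 12);
    return 0;
}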



Fig:1.13(Dual mode view)

System Calls in Operating System (OS)

A system call is a way for a user program to interface with the operating system. The program requests several
services, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be written
in assembly language or a high-level language like C or Pascal. System calls are predefined functions that the
operating system may directly invoke if a high-level language is used. A system call is a method for a computer
program to request a service from the kernel of the operating system on which it is running. A system call is a
method of interacting with the operating system via programs. A system call is a request from computer software to
an operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts as a
link between the operating system and a process, allowing user-level programs to request operating system services.
The kernel system can only be accessed using system calls. System calls are required for any programs that use
resources.
How are system calls made?
When computer software needs to access the operating system's kernel, it makes a system call. The system call uses
an API to expose the operating system's services to user programs. It is the only method to access the kernel system.
All programs or processes that require resources for execution must use system calls, as they serve as an interface
between the operating system and user programs.
Below are some examples of how a system call varies from a user function.

• A system call function may create and use kernel processes to execute the asynchronous processing.
• A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes
in the kernel protection domain.
• System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.
• The code and data for system calls are stored in global kernel memory.

Why do you need system calls in the Operating System?

There are various situations in which system calls are required in the operating system. Some of these situations are as follows:
• It is required when a file system wants to create or delete a file.
• Network connections require the system calls to send and receive data packets.
• If you want to read or write a file, you need to make system calls.
• If you want to access hardware devices, such as a printer or scanner, you need a system call.
• System calls are used to create and manage new processes.



How System Calls Work

The applications run in an area of memory known as user space. A system call connects to the operating system's kernel, which executes in kernel space. When an application makes a system call, it must first obtain permission from the kernel. It achieves this using an interrupt request, which pauses the current process and transfers control to the kernel. If the request is permitted, the kernel performs the requested action, like creating or deleting a file. The application receives the kernel's output as its input and resumes the procedure once the input is received. When the operation is finished, the kernel returns the results to the application and moves data from kernel space to user space in memory.
A simple system call may take a few nanoseconds to provide the result, like retrieving the system date and
time. A more complicated system call, such as connecting to a network device, may take a few seconds. Most
operating systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating
systems are multi-threaded, which means they can handle various system calls at the same time.
Types of System Calls

Process Control
Process control system calls are used to direct processes. Examples include create process, load, execute, abort/end, and terminate process.
File Management
File management system calls are used to handle files. Examples include create file, delete file, open, close, read, and write.
Device Management
Device management system calls are used to deal with devices. Examples include read device, write device, get device attributes, and release device.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include get/set system data and get/set time or date.
Communication
Communication system calls are used for communication. Examples include create and delete communication connections and send and receive messages.
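A minimal sketch of the file-management category on a UNIX-like system follows; the file name notes.txt is an assumed example. Each labeled call traps into the kernel as described above.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[128];
    int fd = open("notes.txt", O_RDONLY);     /* file management: open  */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf);    /* file management: read  */
    if (n > 0)
        write(STDOUT_FILENO, buf, n);         /* file management: write */
    close(fd);                                /* file management: close */
    return 0;
}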

Examples of Windows and UNIX system calls.

Process | Windows | Unix
Process Control | CreateProcess(), ExitProcess(), WaitForSingleObject() | fork(), exit(), wait()
File Manipulation | CreateFile(), ReadFile(), WriteFile(), CloseHandle() | open(), read(), write(), close()
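The UNIX column of the table can be illustrated with a short, hedged C sketch of the process-control calls: fork() creates a child, execlp() replaces the child's image with a new program (ls is only an example), and wait() lets the parent collect the child.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                         /* process control: create    */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL); /* child: run a new program   */
        _exit(1);                               /* reached only if exec fails */
    }
    wait(NULL);                                 /* process control: wait      */
    printf("child finished\n");                 /* parent continues           */
    return 0;
}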



Difference between User Mode and Kernel Mode

Terms | User Mode | Kernel Mode
Definition | User mode is the restricted mode in which application programs start and execute. | Kernel mode is the privileged mode, which the computer enters when accessing hardware resources.
Modes | Also called the slave mode or the restricted mode. | Also called the system mode, master mode, or privileged mode.
Address space | Each process gets its own address space. | All processes share a single address space.
Interruptions | If an interrupt occurs, only one process fails. | If an interrupt occurs, the whole operating system might fail.
Restrictions | Access to kernel programs is restricted; they cannot be accessed directly. | Both user programs and kernel programs can be accessed.



Operating System (BCS401)

Unit-1 Practice Question

Q No. Question (S)

1. Define the operating system and mention its major functions.

2. Explain in detail about the operating system services.

3. Enumerate various operating system components with their functions in brief.

4. Explain the following terms clearly:


●Spooling
●Multiprogramming
●Multiprocessing
●Multithreading

5. Differentiate between the followings:


●Interactive and Batch Processing System
●Time-Sharing and Real-Time System
●Multiprogramming and Multitasking-System
●Multiuser Systems and Multithreaded Systems

6. What is the real-time operating system? What is the difference between hard real-time and
soft real-time operating systems?

7. Describe the differences between symmetric and asymmetric multiprocessing.

8. What do you understand by system call? Enumerate five system calls used for
process management.



9. Explain Kernel. Also, explain in detail about the Monolithic and Microkernel Systems.

Unit-2:(Concurrent Processes)
Syllabus:
Concurrent Processes: Process Concept, Principle of Concurrency, Producer / Consumer
Problem, Mutual Exclusion, Critical Section Problem, Dekker’s solution, Peterson’s solution,
Semaphores, Test and Set operation; Classical Problem in Concurrency- Dining Philosopher
Problem, Sleeping Barber Problem; Inter Process Communication models and Schemes, Process
generation.

What are Concurrent Processes?


LECTURE 7
Process
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the
system.
To put it in simple terms, we write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data. The following image shows a simplified layout of a process
inside main memory −



S.No Component & Description

Stack
1 The process Stack contains the temporary data such as method/function
parameters, return address and local variables.

2 Heap
This is dynamically allocated memory to a process during its run time.

Text
3 This includes the compiled program code, along with the current activity
represented by the value of the Program Counter and the contents of the
processor's registers.

4 Data
This section contains the global and static variables.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.

S.No State & Description

1 Start
This is the initial state when a process is first started/created.



Ready
The process is waiting to be assigned to a processor. Ready processes are
2 waiting to have the processor allocated to them by the operating system so that
they can run. A process may come into this state after the Start state, or while
running, if it is interrupted by the scheduler so the CPU can be assigned to some
other process.

Running
3 Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.

Waiting
4 Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.

Terminated or Exit
5 Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from
main memory.

Fig:2.1(Process life cycle)


Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process as listed below in the table −

S.N. Information & Description

Process State
1 The current state of the process i.e., whether it is ready, running, waiting, or
whatever.



2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each of the process in the operating system.

4 Pointer
A pointer to parent process.

Program Counter
5 Program Counter is a pointer to the address of the next instruction to be
executed for this process.

CPU registers
6 The contents of the various CPU registers, which must be saved when the
process leaves the running state so that it can later resume execution correctly.

CPU Scheduling Information


7 Process priority and other scheduling information which is required to schedule
the process.

Memory management information


8 This includes the information of page table, memory limits, Segment table
depending on memory used by the operating system.

Accounting information
9 This includes the amount of CPU time used for process execution, time limits,
account numbers, job or process numbers, etc.

10 IO status information
This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −



Fig:2.2(Process Control Block)
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
CPU Switches From Process To Process



Lecture:8:

Fig:2.3
Operations on Processes

There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination. These are given in
detail as follows −

Process Creation

Processes need to be created in the system for different operations. This can be done by the
following events −

● User request for process creation


● System initialization
● Execution of a process creation system call by a running process
● Batch job initialization



A process may be created by another process using fork(). The creating process is called the
parent process and the created process is the child process. A child process can have only one
parent but a parent process may have many children. Both the parent and child processes have
the same memory image, open files, and environment strings. However, they have distinct
address spaces.

A diagram that demonstrates process creation using fork() is as follows −

Fig:2.4(process creation using fork())
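As a concrete illustration, here is a minimal C sketch of fork() (assuming a POSIX
system, where fork(), wait(), getpid() and getppid() are available); it is an
illustrative example, not part of the syllabus text:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* create a child process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {           /* child: fork() returned 0 */
        printf("child PID = %d, parent = %d\n", getpid(), getppid());
        exit(0);
    } else {                         /* parent: fork() returned the child's PID */
        wait(NULL);                  /* wait until the child terminates */
        printf("parent PID = %d reaped child %d\n", getpid(), pid);
    }
    return 0;
}

Both processes continue from the line after fork(); only the return value tells the
parent and the child apart.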


Process Termination

A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit() system call.

At that point the process may return a status value to its parent process, which
collects it via the wait() system call.

All the resources of the process including physical and virtual memory, open files, I/O buffers
are deallocated by the operating system.

Reasons for process termination

A parent process may terminate the execution of one of its children for the following reasons −

● The child exceeds its usage of resources that it has been allocated.
● The task that is assigned to the child is no longer required.
● The parent is exiting and the operating system does not allow a child to continue if its
parent terminates.

Some systems, including VMS, do not allow a child to exist if its parent has terminated. In such
systems, if a process terminates either normally or abnormally, then all its children have to be
terminated. This concept is referred to as cascading termination.

Inter-process Communication:
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other
processes executing in the system. Any process that does not share data with any other process is
independent. A process is cooperating if it can affect or be affected by the other processes
executing in the system. So, any process that shares data with other processes is a cooperating
process.
There are several reasons for providing an environment that allows process cooperation:
∙ Information sharing
∙ Computation speedup
∙ Modularity
∙ Convenience
Cooperating processes require an inter-process communication (IPC) mechanism that will allow
them to exchange data and information. These are some fundamental models of inter-process
communication:
1) shared memory
2) message passing
3) Naming
4) Synchronization
5) Buffering
Shared Memory:
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. A shared-memory region resides in the address space of the process creating the
shared-memory segment. Other processes that wish to communicate using this shared-memory
segment must attach it to their address space. Processes can then exchange information by
reading and writing data to the shared region. The form of the data and the location are



determined by these processes and are not under the operating system's control. The processes
are also responsible for ensuring that they are not writing to the same location simultaneously.
Shared memory is faster than message passing, in shared-memory systems, system calls are
required only to establish shared-memory regions. Once shared memory is established, all
accesses are treated as routine memory accesses, and no assistance from the kernel is required.
Shared memory allows maximum speed and convenience of communication.
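As a hedged illustration of the shared-memory model, the following C sketch uses the
System V calls shmget() and shmat() (one possible API; POSIX shm_open() is an
alternative). The key SHM_KEY and the segment size are arbitrary values chosen for
this example:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SHM_KEY 0x1234               /* arbitrary key for this example */

int main(void)
{
    /* create (or locate) a 4096-byte shared segment */
    int shmid = shmget(SHM_KEY, 4096, IPC_CREAT | 0666);
    if (shmid < 0) { perror("shmget"); return 1; }

    /* attach the segment to this process's address space */
    char *region = shmat(shmid, NULL, 0);
    if (region == (char *) -1) { perror("shmat"); return 1; }

    /* ordinary memory accesses are now visible to any other
       process that attaches the same segment -- no kernel help needed */
    strcpy(region, "hello from shared memory");

    shmdt(region);                   /* detach when done */
    return 0;
}

A second process calling shmget() with the same key and then shmat() would read the
string directly from memory, which is why no system call is needed per access.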

Message passing:
In the message-passing model, communication takes place by means of messages exchanged
between the cooperating processes. Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same address space and is
particularly useful in a distributed environment, where the communicating processes may reside
on different computers connected by a network. For example, a chat program.
Message passing is slower than Shared memory, as message-passing systems are typically
implemented using system calls and thus require the more time-consuming task of kernel
intervention.
Message passing is useful for exchanging smaller amounts of data and is also easier to
implement than shared memory.
The actual function of message-passing is normally provided in the form of a pair of primitives:



∙ Send(message)
∙ Receive(message)
If processes P and Q want to communicate, they must send messages to and receive messages
from each other; a communication link must exist between them. Here are several methods for
logically implementing a link and the send () / receive () operations:
∙ Direct or indirect communication
∙ Synchronous or asynchronous communication
∙ Automatic or explicit buffering
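As a small hedged sketch of these primitives, the following C program uses a POSIX
pipe as the communication link between a parent and a child (pipes are only one way to
realize send()/receive(); message queues and sockets are others):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* child acts as the sender */
        close(fd[0]);
        const char *msg = "ping";
        write(fd[1], msg, strlen(msg) + 1);    /* send(message) */
        close(fd[1]);
        _exit(0);
    } else {                         /* parent acts as the receiver */
        char buf[32];
        close(fd[1]);
        read(fd[0], buf, sizeof buf);          /* receive(message) */
        printf("received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
    }
    return 0;
}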
Addressing (Naming):
Processes that want to communicate must have a way to refer to each other. The various schemes
for specifying processes in send and receive primitives are of two types:
1. Direct communication
2. Indirect communication
Direct Communication: In direct communication, each process that wants to communicate
must explicitly name the recipient or sender of the communication. In this scheme, the send()
and receive() primitives are defined as:
∙ send (P, message) - Send a message to process P.
∙ receive (Q, message)- Receive a message from process Q.
A communication link in this scheme has the following properties:
∙ A link is established automatically between every pair of processes that want to communicate.
The processes need to know only each other's identity to communicate.
∙ A link is associated with exactly two processes.
∙ Between each pair of processes, there exists exactly one link.
This scheme exhibits symmetry in addressing, that is, both the sender process and the receiver
process must name the other to communicate.
A variant of this scheme employs asymmetry in addressing. Here, only the sender names the
recipient; the recipient is not required to name the sender. In this scheme, the send() and
receive() primitives are defined as follows:
∙ send(P, message) - Send a message to process P.



∙ receive (id, message) -Receive a message from any process; the variable id is set to the name
of the process with which communication has taken place.
Indirect Communication: In indirect communication, the messages are sent to and received
from mailboxes, or ports. Each mailbox has a unique identification. Two processes can
communicate only if the processes have a shared mailbox. The send() and receive() primitives
are defined as follows:
∙ send (A, message) -Send a message to mailbox A.
∙ receive (A, message)-Receive a message from mailbox A.
In this scheme, a communication link has the following properties:
∙ A link is established between a pair of processes only if both members of the pair have a shared
mailbox.
∙ A link may be associated with more than two processes.
∙ Between each pair of communicating processes, there may be a number of different links, with
each link corresponding to one mailbox.
A mailbox may be owned either by a process or by the operating system. When a process that
owns a mailbox terminates, the mailbox disappears. If mailbox is owned by the operating
system, then it must provide a mechanism that allows a process to: Create a new mailbox, Send
and receive messages through the mailbox, Delete a mailbox.
Synchronization:
Communication between processes takes place through calls to send () and receive () primitives.
Message passing may be either blocking or non-blocking also known as synchronous and
asynchronous.
∙ Blocking send - The sending process is blocked until the message is received by the receiving
process or by the mailbox.
∙ Non-blocking send - The sending process sends the message and resumes operation.
∙ Blocking receive - The receiver blocks until a message is available.
∙ Non-blocking receive - The receiver retrieves either a valid message or a null.
Different combinations of send () and receive () are possible. When both send () and receive ()
are blocking, we have a rendezvous between the sender and the receiver. This combination
allows for tight synchronization between processes.
Buffering:
Whether communication is direct or indirect, messages exchanged by communicating processes
reside in a temporary queue. Basically, such queues can be implemented in three ways:
∙ Zero capacity - The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
∙ Bounded capacity - The queue has finite length. If the queue is not full when a new message
is sent, the message is placed in the queue and the sender can continue execution without
waiting. The link's capacity is finite, however. If the link is full, the sender must block until
space is available in the queue.
∙ Unbounded capacity - The queue's length is potentially infinite; thus, any number of
messages can wait in it. The sender never blocks.

Lecture:9

Principles of concurrency:
A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes can either directly share a logical address space (that is, both code
and data) or be allowed to share data only through files or messages.
Concurrent access to shared data may result in data inconsistency. To achieve the consistency of
shared data we need to understand the principles of concurrency which are given below:
Race Condition:
∙ A race condition occurs when multiple processes or threads read and write data items so that
the final result depends on the order of execution of instructions in the multiple processes. Let us
consider two simple examples:
− Suppose that two processes, P1 and P2, share the global variable X. At some point in
its execution, P1 updates X to the value 1, and at some point in its execution, P2 updates X to the
value 2. Thus, the two tasks are in a race to write variable X. In this example the “loser” of the
race (the process that updates last) determines the final value of X.
− Consider two process, P3 and P4, that share global variables b and c, with initial values
b = 1 and c = 2. At some point in its execution, P3 executes the assignment b = b + c, and at
some point in its execution, P4 executes the assignment c = b + c. Note that the two processes
update different variables. However, the final values of the two variables depend on the order in
which the two processes execute these two assignments. If P3 executes its assignment statement
first, then the final values are b = 3 and c = 5. If P4 executes its assignment statement first, then
the final values are b = 4 and c = 3.
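The effect is easy to reproduce with threads. In the hedged C sketch below (POSIX
threads assumed), two threads race on the shared variable x because x++ is a
read-modify-write sequence, not an atomic operation; the printed total is often less
than 200000:

#include <pthread.h>
#include <stdio.h>

long x = 0;                          /* shared global, like X above */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++)
        x++;                         /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("x = %ld (expected 200000)\n", x);    /* often smaller */
    return 0;
}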
Operating System Concerns:
Design and management issues for concurrency are as follows:



1. The OS must be able to keep track of the various processes. This is done with the use of
process control blocks.
2. The OS must allocate and de-allocate various resources for each active process.
3. The OS must protect the data and physical resources of each process against unintended
interference by other processes.
4. The functioning of a process, and the output it produces, must be independent of the speed at
which its execution is carried out relative to the speed of other concurrent processes.
Process Interaction:
We can classify the ways in which processes interact on the basis of the degree to which they
are aware of each other’s existence. Following are the degrees of awareness of process:
1. Processes unaware of each other
2. Processes indirectly aware of each other
3. Processes directly aware of each other

Requirements for Mutual Exclusion:


Mutual exclusion should meet the following requirements:
Mutual exclusion must be enforced: Only one process at a time is allowed into its critical section,
among all processes that have critical sections for the same resource or shared object.
∙ A process that halts in its noncritical section must do so without interfering with other
processes.
∙ It must not be possible for a process requiring access to a critical section to be delayed
indefinitely: no deadlock or starvation.
∙ When no process is in a critical section, any process that requests entry to its critical section
must be permitted to enter without delay.
∙ No assumptions are made about relative process speeds or number of processors.
∙ A process remains inside its critical section for a finite time only.
The Critical-Section Problem:
Consider a system consisting of n processes {Po, P1 , ... , Pn - 1}. Each process has a segment of
code, called a critical section in which the process may be changing common variables, updating
a table, writing a file, and so on. “When one process is executing in its critical section, no other
process is to be allowed to execute in its critical section. That is, no two processes are executing
in their critical sections at the same time.”
In Critical-section problem:
∙ Each process must request permission to enter its critical section.
∙ The section of code implementing this request is the entry section.
∙ The critical section may be followed by an exit section.
∙ The remaining code is the remainder section.
The general structure of a typical process Pi is shown as:



A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion: If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder sections
can participate in deciding which will enter its critical section next, and this selection cannot be
postponed indefinitely.
3. Bounded waiting: There exists a bound, or limit, on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted.
Lecture:10
Bakery Algorithm
Lamport proposed a bakery algorithm, a software solution, for the n process mutual exclusion
problem. This algorithm solves a critical problem, following the fairest, first come, first serve
principle.

This algorithm is known as the bakery algorithm as this type of scheduling is adopted in bakeries
where token numbers are issued to set the order of customers. When a customer enters a bakery
store, he gets a unique token number on its entry. The global counter displays the number of
customers currently being served, and all other customers must wait at that time. Once the baker
finishes serving the current customer, the next number is displayed. The customer with the next
token is now being served.

Similarly, in Lamport's bakery algorithm, processes are treated as customers. In this, each
process waiting to enter its critical section gets a token number, and the process with the lowest
number enters the critical section. If two processes have the same token number, the process with
a lower process ID enters its critical section.

Algorithm of Lamport's bakery algorithm


do {
    entering[i] = true;                  /* show interest in the critical section */
    /* take a token number one greater than any existing number */
    number[i] = 1 + max(number[0], number[1], ..., number[n - 1]);
    entering[i] = false;

    for (j = 0; j < n; j++) {
        /* busy wait until process Pj has finished taking its token number */
        while (entering[j])
            ;   /* do nothing */
        /* wait while Pj holds a smaller token (or an equal token and smaller ID) */
        while (number[j] != 0 && (number[j], j) < (number[i], i))
            ;   /* do nothing */
    }

    /* critical section */

    number[i] = 0;                       /* exit section */

    /* remainder section */
} while (1);

Structure of process Pi for Lamport's bakery algorithm to critical section problem.

Explanation –

This algorithm uses the following two shared arrays:

1. boolean entering[n];
2. int number[n];

All entries of entering[] are initialized to false, and all n entries of number[] are
initialized to 0. The values of the integer variables are used as token numbers.

When a process wishes to enter a critical section, it chooses a greater token number than any
earlier number.

When a process Pi wishes to enter the critical section, it sets entering[i] to true to
make other processes aware that it is choosing a token number. It then chooses a token
number greater than those held by all other processes and writes it into number[i].
After reading the other processes' numbers, it sets entering[i] to false. It then
enters a loop to evaluate the status of the other processes, waiting while some other
process Pj is still choosing its token number.

Pi then waits until all processes with smaller token numbers, or with the same token
number but a smaller process ID, have been served first.

When the process has finished with its critical section execution, it resets its number variable to
0.

The Bakery algorithm meets all the requirements of the critical section problem.

Lecture:11
What is Mutual Exclusion?



Mutual Exclusion is a property of process synchronization that states that “no two processes can
exist in the critical section at any given point of time”. The term was first coined by Dijkstra.
Any process synchronization technique being used must satisfy the property of mutual exclusion,
without which it would not be possible to get rid of a race condition.
The need for mutual exclusion comes with concurrency. There are several kinds of concurrent
execution:
1. Interrupt handlers
2. Interleaved, preemptively scheduled processes/threads
3. Multiprocessor clusters, with shared memory
4. Distributed systems
Mutual exclusion methods are used in concurrent programming to avoid the simultaneous use of
a common resource, such as a global variable, by pieces of computer code called critical
sections.
The requirement of mutual exclusion is that when process P1 is accessing a shared resource R1,
another process should not be able to access resource R1 until process P1 has finished its
operation with resource R1.
Examples of such resources include files, I/O devices such as printers, and shared data structures.

Software Based Solution to Critical Section Problem:


1. Peterson’s Solution: Peterson's solution is restricted to two processes that alternate execution
between their critical sections and remainder sections. The processes are numbered P0 and P1.
For convenience, when presenting Pi, we use Pj to denote the other process; that is, j equals 1 - i.
Peterson's solution requires the two processes to share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then
process Pi is allowed to execute in its critical section.
The flag array is used to indicate if a process is ready to enter its critical section. For example, if
flag [i] is true, this value indicates that Pi is ready to enter its critical section.
To enter the critical section, process Pi first sets flag [i] to be true and then sets turn to the value
j, thereby asserting that if the other process wishes to enter the critical section, it can do so. The
eventual value of turn determines which of the two processes is allowed to enter its critical
section first.
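The structure of process Pi in Peterson's solution can be sketched as follows (a
standard reconstruction, given here because the original figure is not reproduced):

do {
    flag[i] = true;                  /* Pi is ready to enter */
    turn = j;                        /* offer the turn to the other process */
    while (flag[j] && turn == j)
        ;                            /* busy wait */

    /* critical section */

    flag[i] = false;                 /* Pi leaves its critical section */

    /* remainder section */
} while (true);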



In this solution following conditions are fulfilled:
∙ Mutual exclusion is preserved.
∙ The progress requirement is satisfied.
∙ The bounded-waiting requirement is met.
To prove property 1, we note that each Pi enters its critical section only if either
flag[j] == false or turn == i. Also note that, if both processes were executing in
their critical sections at the same time, then flag[0] == flag[1] == true. These two
observations imply that P0 and P1 could not have successfully executed their while
statements at about the same time, since the value of turn can be either 0 or 1 but
cannot be both. Hence, one of the processes --- say, Pj --- must have successfully
executed the while statement, whereas Pi had to execute at least one additional
statement ("turn == j"). However, at that time, flag[j] == true and turn == j, and
this condition will persist as long as Pj is in its critical section; as a result,
mutual exclusion is preserved. To prove properties 2 and 3, we note that a process Pi
can be prevented from entering the critical section only if it is stuck in the while
loop with the condition flag[j] == true and turn == j; this loop is the only one
possible. If Pj is not ready to enter the critical section, then flag[j] == false,
and Pi can enter its critical section. If Pj has set flag[j] to true and is also
executing in its while statement, then either turn == i or turn == j. If turn == i,
then Pi will enter the critical section. If turn == j, then Pj will enter the critical
section. However, once Pj exits its critical section, it will reset flag[j] to false,
allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also
set turn to i. Thus, since Pi does not change the value of the variable turn while
executing the while statement, Pi will enter the critical section (progress) after at
most one entry by Pj (bounded waiting).
Lecture:12
2.What is Dekker’s Algorithm?



It is a simple and efficient algorithm that allows only one process to access a shared resource at a
time. The algorithm achieves mutual exclusion by using two flags that indicate each process's
intent to enter the critical section. By alternating the flags' use and checking if the other process's
flag is set, the algorithm ensures that only one process enters the critical section at a time.
Algorithm

The algorithm uses flags to indicate the intention of each process to enter a critical section, and a
turn variable to determine which process is allowed to enter the critical section first.

1st Attempt

A process wishing to execute its critical section first examines the contents of turn (a global
memory location). If the value of turn is equal to the number of the process, then the process may
proceed to its critical section. Otherwise, it is forced to wait. A waiting process
repeatedly reads the value of turn until it is allowed to enter its critical section.
This procedure is known as busy waiting or spin waiting, because the waiting process
does nothing productive and consumes processor time while waiting for its chance.

This solution guarantees the mutual exclusion property but has drawbacks:

− If one process fails, the other process is permanently blocked. This is true whether a process
fails in its critical section or outside of it.

2nd Attempt

The flaw in the first attempt is that it stores the name of the process that may enter its critical
section and if one process fails, the other process is permanently blocked. To overcome this
problem a Boolean vector flag is defined.

If one process wants to enter its critical section it first checks the other process flag until that flag
has the value false, indicating that the other process is not in its critical section. The checking



process immediately sets its own flag to true and proceeds to its critical section. When it leaves
its critical section it sets its flag to false.

In this solution, if one process fails outside its critical section (including the
flag-setting code), the other process is not blocked, because the failed process's
flag then remains false. However, this solution has two drawbacks:
− If one process fails inside its critical section or after setting its flag to true just before entering
its critical section, then the other process is permanently blocked.
− It does not guarantee mutual exclusion.
3rd Attempt
The second attempt failed because a process can change its state after the other
process has checked it but before the other process enters its critical section.
Perhaps we can fix this problem with a simple interchange of two statements:

This solution guarantees mutual exclusion: for example, once P0 sets flag[0] to true,
P1 cannot enter its critical section; and if P1 is already in its critical section when
P0 sets its flag, then P0 will be blocked by the while statement.
Problem:



− If both processes set their flags to true before the while statement, then each will
think that the other is in its critical section, causing deadlock.
4th Attempt
In the third attempt, a process sets its state without knowing the state of the other process. We
can fix this in a way that makes each process more deferential: each process sets its flag to
indicate its desire to enter its critical section but is prepared to reset the flag to defer to the other
process.

This solution guarantees mutual exclusion but is still flawed. Consider the following sequence of
events: P0 sets flag [0] to true. P1 sets flag [1] to true. P0 checks flag [1]. P1 checks flag [0]. P0
sets flag [0] to false. P1 sets flag [1] to false. P0 sets flag [0] to true. P1 sets flag [1] to true.
This sequence could be extended indefinitely, and neither process could enter its critical section.
A Correct Solution



When P0 wants to enter its critical section, it sets its flag to true. It then checks the flag of P1. If
that is false, P0 may immediately enter its critical section. Otherwise, P0 consults turn. If it finds
that turn = 0, then it knows that it is its turn to insist and periodically checks P1’s flag. P1 will at
some point note that it is its turn to defer and set its flag to false, allowing P0 to proceed. After
P0 has used its critical section, it sets its flag to false to free the critical section and sets turn to 1
to transfer the right to insist to P1.
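This correct solution can be sketched for P0 as follows (a standard reconstruction of
Dekker's algorithm; P1's code is symmetric, with the roles of 0 and 1 exchanged):

/* shared: boolean flag[2] = {false, false}; int turn = 0; */

do {
    flag[0] = true;                  /* P0 wants to enter */
    while (flag[1]) {                /* P1 also wants to enter */
        if (turn == 1) {             /* it is P1's turn to insist */
            flag[0] = false;         /* defer: withdraw the claim */
            while (turn == 1)
                ;                    /* busy wait */
            flag[0] = true;          /* try again */
        }
    }

    /* critical section */

    turn = 1;                        /* transfer the right to insist to P1 */
    flag[0] = false;

    /* remainder section */
} while (true);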
Lecture:13
Semaphores:
Controlling synchronization by using an abstract data type, called a semaphore, was proposed by
Dijkstra in 1965.
∙ Semaphores are easily implemented in OS and provide a general purpose solution to
controlling access to critical section.
∙ A semaphore is an integer value used for signaling among processes.
∙ A semaphore (S) is an integer variable that can perform following atomic operations:
1) Initialization
2) Decrement
3) Increment
A decrement may result in the blocking of a process, and an increment may result in the
unblocking of a process.



∙ Apart from initialization a semaphore (S) is accessed only through two standard atomic
operations:
1) Wait() or Down() 🡪 P (from the Dutch proberen, “To Test”)
2) Signal() or Up() 🡪 V (from verhogen, “To Increment” )
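The classical busy-waiting definitions of these two operations can be sketched as:

wait(S) {
    while (S <= 0)
        ;              /* busy wait until the semaphore is positive */
    S--;               /* decrement: claim one unit */
}

signal(S) {
    S++;               /* increment: release one unit */
}

Practical implementations replace the busy wait by placing the blocked process in a
queue associated with the semaphore, as described below for binary semaphores.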

Modifications to the integer value of the semaphore in the wait() and signal()
operations must be executed indivisibly. That is, when one process modifies the
semaphore value, no other process can simultaneously modify that same semaphore value.
This situation is a critical section problem and can be resolved in either of two ways:
1) By using Counting Semaphore
2) By using Binary Semaphore
1) Counting Semaphore
∙ The value of a counting semaphore can range over an unrestricted domain.
∙ It is also known as general semaphore.
Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances. The semaphore is initialized to the number of resources available. Each
process that wishes to use a resource performs a wait() operation on the semaphore (thereby
decrementing the count). When a process releases a resource, it performs a signal() operation
(incrementing the count). When the count for the semaphore goes to 0, all resources are being
used. After that, processes that wish to use a resource will block until the count becomes greater
than 0.
2)Binary Semaphore
∙ The value of a binary semaphore can range only between 0 and 1.
∙ Binary semaphores are known as mutex locks, as they are locks that provide mutual
exclusion.
∙ A queue is used to hold the processes waiting on the semaphore.
∙ A FIFO policy is used to remove processes from the queue.
∙ Binary semaphores can be used to deal with the critical-section problem for multiple
processes: the n processes share a semaphore, mutex, initialized to 1.

Mutex
Mutex is a specific kind of binary semaphore that is used to provide a locking mechanism. It
stands for Mutual Exclusion Object. Mutex is mainly used to provide mutual exclusion to a
specific portion of the code so that the process can execute and work with a particular section of
the code at a particular time.
Mutex implementations often use a priority inheritance mechanism to address priority
inversion. The priority inheritance mechanism keeps higher-priority processes in the
blocked state for the minimum possible time. However, this cannot completely avoid the
priority inversion problem; it can only reduce its effect to an extent.

Advantages of Mutex
● No race condition arises, as only one process is in the critical section at a time.
● Data remains consistent and it helps in maintaining integrity.
● It’s a simple locking mechanism that can be obtained by a process before entering into a
critical section and released while leaving the critical section.
Disadvantages of Mutex
● If after entering into the critical section, the thread sleeps or gets preempted by a high-
priority process, no other thread can enter into the critical section. This can lead to starvation.
● When the previous thread leaves the critical section, then only other processes can enter into
it, there is no other mechanism to lock or unlock the critical section.
● Implementation of mutex can lead to busy waiting that leads to the wastage of the CPU
cycle.
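For illustration, the POSIX threads API exposes exactly this locking pattern. The
hedged sketch below protects the racy counter from the earlier race-condition example
with a mutex, so the final value is now always correct:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);     /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 200000 */
    return 0;
}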

Lecture:14
Producer-Consumer Problem:

The Producer-Consumer problem is a classical multi-process synchronization problem;
that is, it involves achieving synchronization between more than one process.

In the producer-consumer problem there is one Producer that produces items and one
Consumer that consumes the items produced by the Producer. Both share the same
fixed-size memory buffer.

The task of the Producer is to produce the item, put it into the memory buffer, and again start
producing items. Whereas the task of the Consumer is to consume the item from the memory
buffer.

The following rules characterize the producer-consumer problem:

o The producer should produce data only when the buffer is not full. In case it is found that
the buffer is full, the producer is not allowed to store any data into the memory buffer.
o Data can only be consumed by the consumer if and only if the memory buffer is not
empty. In case it is found that the buffer is empty, the consumer is not allowed to use any
data from the memory buffer.
o The producer and the consumer should not be allowed to access the memory buffer
at the same time.

To illustrate the concept of cooperating processes, let's consider the producer-consumer


problem, which is a common paradigm for cooperating processes. A producer process produces
information that is consumed by a consumer process.
One solution to the producer-consumer problem uses shared memory. To allow producer and
consumer processes to run concurrently, we must have available a buffer of items that can be
filled by the producer and emptied by the consumer. The producer and consumer must be
synchronized, so that the consumer does not try to consume an item that has not yet been
produced. To solve this problem let’s take the bounded buffer. The bounded buffer assumes a
fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer
must wait if the buffer is full.
The following variables reside in a region of memory shared by the producer and consumer
processes:
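(a standard reconstruction of the declarations, since the figure is not reproduced
here; the fields of an item are unspecified in the text)

#define BUFFER_SIZE 10

typedef struct {
    int data;               /* placeholder field for one item */
} item;

item buffer[BUFFER_SIZE];   /* circular array shared by both processes */
int in = 0;                 /* next free position */
int out = 0;                /* first full position */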



The shared buffer is implemented as a circular array with two logical pointers: in and
out. The variable in points to the next free position in the buffer, and the variable
out points to the first full position in the buffer. The buffer is empty when
in == out, and the buffer is full when ((in + 1) % BUFFER_SIZE) == out.

This scheme allows at most BUFFER_SIZE - 1 items in the buffer at the same time.
To overcome the limitation of BUFFER_SIZE – 1, we add an integer variable counter, which is
initialized to 0. Counter is incremented every time we add a new item to the buffer and is
decremented every time we remove one item from the buffer. The code for the producer process
can be modified as follows:
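A standard reconstruction of the modified producer loop (and the matching consumer
loop) is sketched below, since the original figure is not reproduced here:

/* shared, in addition to the declarations above: */
int counter = 0;

/* producer process */
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;                            /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

/* consumer process */
while (true) {
    while (counter == 0)
        ;                            /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}

Note that counter++ and counter-- executed concurrently are themselves
read-modify-write sequences, so this version reintroduces a race condition; that is
exactly why the synchronization tools of this unit are needed.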



Lecture:15
TestAndSet() Instruction
∙ The test-and-set instruction is offered by the hardware as a single machine instruction.
∙ Test-and-set instructions are used to write to a memory location and return its old value as a
single atomic (non-interruptible) operation.
∙ The important characteristic of this instruction is that it is executed atomically.
Thus, if two TestAndSet () instructions are executed simultaneously (each on a different CPU),
they will be executed sequentially in some arbitrary order. If the machine supports the
TestAndSet () instruction, then we can implement mutual exclusion by declaring a Boolean
variable lock, initialized to false.
A lock can be built using a TestAndSet() instruction as follows:
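(a standard reconstruction, since the figure is not reproduced here)

boolean TestAndSet(boolean *target) {
    boolean rv = *target;    /* save the old value */
    *target = true;          /* set the lock unconditionally */
    return rv;               /* old value: false means the lock was free */
}

/* mutual exclusion using TestAndSet(); lock is initialized to false */
do {
    while (TestAndSet(&lock))
        ;                    /* spin until the old value was false */

    /* critical section */

    lock = false;            /* release the lock */

    /* remainder section */
} while (true);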



Problem:
This lock satisfies mutual exclusion, but it does not satisfy the bounded-waiting
requirement.
Lecture:16
Classical Problems of Synchronization:
The Dining-Philosophers problem:
∙ Five philosophers sit around a circular table. Each philosopher spends his life
alternately thinking and eating. In the center of the table is a large plate of food.
The philosophers can only afford five chopsticks. One chopstick is placed between each
pair of philosophers, and they agree that each will only use the chopsticks to his
immediate right and left.
∙ The problem is to design a set of processes for the philosophers such that each
philosopher can eat periodically and none starves. A philosopher to the left or right
of a dining philosopher cannot eat while that philosopher is eating, since the
chopsticks are shared resources.



Fig:1.4
Solution to this problem using semaphores can be implemented as follows:
Each philosopher picks up one chopstick (say, the left) before trying to pick up the
other. A philosopher tries to grab a chopstick by executing a wait() operation on that
semaphore; he releases his chopsticks by executing the signal() operation on the
appropriate semaphores.
/* Shared Data */
Semaphore chopstick[5]; // All the elements of chopstick are initialized to 1.
/* Structure of philosopher i */
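(a standard reconstruction of the structure, since the figure is not reproduced here)

do {
    wait(chopstick[i]);                  /* pick up the left chopstick */
    wait(chopstick[(i + 1) % 5]);        /* pick up the right chopstick */

    /* eat */

    signal(chopstick[i]);                /* put down the left chopstick */
    signal(chopstick[(i + 1) % 5]);      /* put down the right chopstick */

    /* think */
} while (true);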

Although this solution guarantees that no two neighbours are eating simultaneously, but it could
create a deadlock.



Suppose that all five philosophers become hungry simultaneously and each grabs her left
chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to
grab her right chopstick, she will be delayed forever.
Some possible solutions to overcome the deadlock problem are:
1. Allow at most four philosophers to be sitting simultaneously at the table.
2. Allow a philosopher to pick up his chopsticks only if both chopsticks are available.
3. Use an asymmetric solution; that is, an odd philosopher picks up first his left chopstick and
then his right chopstick, whereas an even philosopher picks up his right chopstick and then his
left chopstick.
Lecture:17
Sleeping Barber Problem:
Consider a barber’s shop where there is only one barber, one barber chair and a number of
waiting chairs for the customers. When there are no customers the barber sits on the barber chair
and sleeps. When a customer arrives, he wakes the barber, or waits in one of the vacant
chairs if the barber is cutting someone else's hair. When all the chairs are full, a
newly arrived customer simply leaves.
Problems
∙ There might be a scenario in which the customer ends up waiting on the barber while
the barber waits on the customer, which would result in a deadlock.
∙ There might also be starvation if customers are not served in their order of arrival,
as some customers would not get a haircut even though they have been waiting long.
The solution to these problems involves the use of three semaphores out of which one is a mutex
(binary semaphore). They are:
∙ Customers: Helps count the waiting customers.
∙ Barber: To check the status of the barber, if he is idle or not.
∙ accessSeats: A mutex which allows the customers to get exclusive access to the number of free
seats and allows them to increase or decrease the number.
∙ NumberOfFreeSeats: To keep the count of the available seats, so that the customer can either
decide to wait if there is a seat free or leave if there are none.
The Procedure
∙ When the barber first comes to the shop, he looks out for any customers i.e. calls P(Customers),
if there are none he goes to sleep.



∙ Now when a customer arrives, the customer tries to get access to the accessSeats mutex i.e. he
calls P(accessSeats), thereby setting a lock.
∙ If no free seat (barber chair and waiting chairs) is available he releases the lock i.e. does a
V(accessSeats) and leaves the shop.
∙ If there is a free seat he first decreases the number of free seats by one and he calls
V(Customers) to notify the barber that he wants to cut.
∙ Then the customer releases the lock on the accessSeats mutex by calling V(accessSeats).
∙ Meanwhile when V(Customers) was called the barber awakes.
∙ The barber locks the accessSeats mutex as he wants to increase the number of free seats
available, as the just arrived customer may sit on the barber’s chair if that is free.
∙ Now the barber releases the lock on the accessSeats mutex so that other customers can
access it to see the status of the free seats.
∙ The barber now calls a V(Barber), i.e. he tells the customer that he is available to cut.
∙ The customer now calls a P(Barber), i.e. he tries to get exclusive access of the barber to cut his
hair.
∙ The customer gets a haircut from the barber and as soon as he is done, the barber goes back to
sleep if there are no customers or waits for the next customer to alert him.
∙ When another customer arrives, he repeats the above procedure again.
∙ If the barber is busy then the customer decides to wait on one of the free waiting seats.
∙ If there are no customers, then the barber goes back to sleep.
Implementation
The following pseudo-code guarantees synchronization between the barber and the
customers and is deadlock-free, but may lead to starvation of a customer.
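A hedged sketch of that pseudo-code, using the four semaphores named above (P and V
denote wait and signal):

/* shared state */
Semaphore Customers = 0;        /* counts waiting customers */
Semaphore Barber = 0;           /* 0 while the barber is busy or asleep */
Semaphore accessSeats = 1;      /* mutex protecting NumberOfFreeSeats */
int NumberOfFreeSeats = N;      /* total number of waiting chairs */

/* barber process */
while (true) {
    P(Customers);               /* sleep until a customer arrives */
    P(accessSeats);
    NumberOfFreeSeats++;        /* one waiting chair becomes free */
    V(Barber);                  /* announce readiness to cut hair */
    V(accessSeats);
    /* cut hair */
}

/* customer process */
P(accessSeats);
if (NumberOfFreeSeats > 0) {
    NumberOfFreeSeats--;        /* take a seat */
    V(Customers);               /* wake the barber if he is asleep */
    V(accessSeats);
    P(Barber);                  /* wait until the barber is ready */
    /* get haircut */
} else {
    V(accessSeats);             /* shop is full: leave */
}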



Practice Questions:
1. What do you understand about the Critical Section?
2. What do you mean by Concurrent Processes? Explain in detail about the Mutual
Exclusion and Critical Section Problem. Explain in detail about the Dining Philosopher
Problem.
3. Explain in detail about the Inter Process Communication models and Schemes. What is a
Critical Section problem? Give the conditions that a solution to the critical section
problem must satisfy.
4. Explain what semaphores are, their usage, implementation given to avoid busy waiting
and binary semaphores.
5. What is the Producer Consumer problem? How can it illustrate the classical problem of
synchronization? Explain.
6. What is the use of IPC?
7. Discuss mutual exclusion using the TestAndSet() instruction.

8. Give the principles, mutual exclusion in critical section problem. Also discuss how well
these principles are followed in Dekker’s solution.

Unit- 3 CPU Scheduling


CPU Scheduling: Scheduling Concepts, Performance Criteria, Process States, Process Transition
Diagram, Schedulers, Process Control Block (PCB), Process address space, Process identification
information, Threads and their management, Scheduling Algorithms, Multiprocessor Scheduling.
Deadlock: System model, Deadlock characterization, Prevention, Avoidance and detection, Recovery
from deadlock

LECTURE 18

Process Concept:

What is a Process:

● A process is an executing program, including the current values of the program


counter, registers, and variables.
● Difference between a process and a program is that the program is a group of
instructions whereas the process is the activity.
● Process can be described:
❖ I/O Bound Process - spends more time doing I/O than computation.
❖ CPU Bound Process - spends more time doing computation than I/O.

Process States:

Fig:3.1



⮚ Start : The process has just arrived.
⮚ Ready : The process is waiting to grab the processor.
⮚ Running : The process has been allocated the processor.
⮚ Waiting : The process is doing I/O work or is blocked.

⮚ Halted : The process has finished and is about to leave the system.

LECTURE 19

What is a Process/Task Control Block (PCB)?


In the OS, each process is represented by its PCB (Process Control Block). The PCB,
generally contains the following information:
• Process State: The state may be new, ready, running, and waiting, halted, and so on.
• Process ID
• Program Counter (PC) value: The counter indicates the address of the next instruction
to be executed or this process.

• Register values: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.

• Memory Management Information (page tables, base/bound registers etc.):


• Processor Scheduling Information ( priority, last processor burst time etc.)
• I/O Status Info (outstanding I/O requests, I/O devices held, etc.)
• List of Open Files
• Accounting Info.: This information includes the amount of CPU and
real time used, time limits, account numbers, job or process numbers, and so on.

If we have a single processor in our system, there is only one running process at a time.
Other ready processes wait for the processor.



In multiprogramming systems, the processor can be switched from one process to another.
Note that when an interrupt occurs, PC and register contents for the running process (which
is being interrupted) must be saved so that the process can be continued correctly
afterwards. Switching between processes occurs as depicted below.

LECTURE 20

Operations on process:
A. Process Creation

● A parent process creates children processes, which, in turn, create other
processes, forming a tree of processes.
● Resource-sharing alternatives:
i. Parent and children share all resources
ii. Children share a subset of the parent's resources
iii. Parent and child share no resources
● Execution alternatives:
i. Parent and children execute concurrently
ii. Parent waits until children terminate

B. Process Termination
● Process executes its last statement and asks the operating system to delete it (exit)
i. Output data from child to parent (via wait)
ii. Process' resources are deallocated by the operating system
● Parent may terminate execution of children processes (abort) when:
i. Child has exceeded allocated resources
ii. Task assigned to child is no longer required
iii. Parent is exiting, and the operating system does not allow a child to
continue if its parent terminates

Process Address Space:


● The process address space consists of the linear address range presented to each
process. Each process is given a flat 32- or 64-bit address space, with the size



depending on the architecture. The term "flat" describes the fact that the address
space exists in a single range. (As an example, a 32-bit address space extends from
the address 0 to 4294967295.)
● Some operating systems provide a segmented address space, with addresses
existing not in a single linear range, but instead in multiple segments. Modern
virtual memory operating systems generally have a flat memory model and not a
segmented one.

● A memory address is a given value within the address space, such as 4021f000. The
process can access a memory address only in a valid memory area. Memory areas
have associated permissions, such as readable, writable, and executable, that
the associated process must respect. If a process accesses a memory address not in
a valid memory area, or if it accesses a valid area in an invalid manner, the kernel
kills the process with the dreaded "Segmentation Fault" message.

● Memory areas can contain all sorts of goodies, such as


⮚ A memory map of the executable file's code, called the text section.
⮚ A memory map of the executable file's initialized global variables, called
the data section.
⮚ A memory map of the zero page (a page consisting of all zeros, used for
purposes such as this) containing uninitialized global variables, called the
bss section
⮚ A memory map of the zero page used for the process's user-space stack (do
not confuse this with the process's kernel stack, which is separate and
maintained and used by the kernel)
⮚ An additional text, data, and bss section for each shared library, such as the
C library and dynamic linker, loaded into the process's address space.
⮚ Any memory mapped files
⮚ Any shared memory segments
⮚ Any anonymous memory mappings, such as those associated with malloc().



Process Identification Information
● Process Identifier (process ID or PID) is a number used by most
operating system kernels (such as that of UNIX, Mac OS X or Microsoft
Windows) to temporarily uniquely identify a process.
● This number may be used as a parameter in various function calls allowing
processes to be manipulated, such as adjusting the process's priority or
killing it altogether.
● In Unix-like operating systems, new processes are created by the fork()
system call. The PID is returned to the parent enabling it to refer to the child
in further function calls. The parent may, for example, wait for the child to
terminate with the waitpid() function, or terminate the process with kill().
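A brief hedged C sketch of that usage (POSIX assumed): the parent stores the PID
that fork() returns and later uses it with kill() and waitpid():

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)                /* child: sleep until a signal arrives */
        for (;;) pause();

    sleep(1);                    /* parent: give the child time to start */
    kill(pid, SIGTERM);          /* use the PID to signal the child */
    waitpid(pid, NULL, 0);       /* use the PID to reap the child */
    return 0;
}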
LECTURE:21
Threads:
Introduction to Thread:

● A thread is a basic unit of CPU utilization; it comprises a thread ID, a program
counter, a register set, and a stack. It shares with other threads belonging to the
same process its code section, data section, and other operating-system resources,
such as open files and signals.
● A traditional process has a single thread of control. If a process has multiple
threads of control, it can perform more than one task at a time.
Benefits of Threads:

A. Responsiveness. Multithreading an interactive application may allow a program to


continue running even if part of it is blocked or is performing a lengthy operation.



B. Resource sharing. Threads share the memory and the resources of the process to
which they belong. The benefit of sharing code and data is that it allows an application to
have several different threads of activity within the same address space.
C. Economy. Allocating memory and resources for process creation is costly. Because
threads share resources of the process to which they belong, it is more economical to
create and context-switch threads.

D. Utilization of multiprocessor architectures. The benefits of multithreading can be greatly


increased in a multiprocessor architecture, where threads may be running in parallel on
different processors. A single threaded process can only run on one CPU, no matter how
many are available.

Multithreading Models (Management of Threads):

Threads may be provided either at the user level, for user threads, or by the kernel,
for kernel threads. User threads are supported above the kernel and are managed
without kernel support, whereas kernel threads are supported and managed directly by
the operating system. There must exist a relationship between user threads and kernel
threads. There are three common ways of establishing this relationship.
A. Many-to-One Model:

The many-to-one model maps many user-level threads to one kernel thread. Thread
management is done by the thread library in user space, so it is efficient; but the
entire process will block if a thread makes a blocking system call. Also, because
only one thread can access the kernel at a time, multiple threads are unable to run
in parallel on multiprocessors.
B. One-to-One Model:

The one-to-one model maps each user thread to a kernel thread. It provides more
concurrency than the many-to-one model by allowing another thread to run when a
thread makes a blocking system call; it also allows multiple threads to run in
parallel on multiprocessors. The only drawback to this model is that creating a user
thread requires creating the corresponding kernel thread.
C. Many-to-Many Model:

The many-to-many model multiplexes many user-level threads to a smaller or equal
number of kernel threads. The many-to-one model lets the developer create as many
user threads as desired but gains no true parallelism, while the one-to-one model
allows for greater concurrency at the cost of one kernel thread per user thread. The
many-to-many model suffers from neither of these shortcomings: developers can create
as many user threads as necessary, and the corresponding kernel threads can run in
parallel on a multiprocessor. Also, when a thread performs a blocking system call,
the kernel can schedule another thread for execution.
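As a concrete illustration, the hedged C sketch below uses POSIX threads (on Linux,
each pthread is typically mapped one-to-one onto a kernel thread) to create several
threads of control inside one process; all of them see the same data section:

#include <pthread.h>
#include <stdio.h>

int shared_data = 42;            /* data section shared by all threads */

void *task(void *arg)
{
    long id = (long) arg;        /* each thread gets its own stack and args */
    printf("thread %ld sees shared_data = %d\n", id, shared_data);
    return NULL;
}

int main(void)
{
    pthread_t tid[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, task, (void *) i);
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);
    return 0;
}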

LECTURE 22

CPU Scheduling Concept:


The main objective of CPU scheduling is to maximize CPU utilization; process
scheduling is the mechanism used to achieve this. The following concepts underlie
process scheduling:
CPU-I/O Burst Cycle:
The success of CPU scheduling depends on an observed property of processes: process
execution consists of a cycle of CPU execution and I/O wait. Processes alternate
between these two states. Process execution begins with a CPU burst. That is followed
by an I/O burst, which is followed by another CPU burst, then another I/O burst, and
so on. Eventually, the final CPU burst ends with a system request to terminate
execution.



Scheduling Queues: A scheduling queue is generally stored as a linked list. A
ready-queue header contains pointers to the first and final PCBs in the list. Each PCB
includes a pointer field that points to the next PCB in the ready queue.
Job queue – set of all processes in the system

Ready queue – set of all processes residing in main memory, ready


and waiting to execute

Device queues – set of processes waiting for an I/O device

Processes migrate among the various queues

Schedulers: A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes from
these queues in some fashion; this selection is carried out by the appropriate
scheduler (long-term, short-term, or medium-term).
Medium-Term Scheduler:

Some operating systems, such as time-sharing systems, may introduce an additional,
intermediate level of scheduling. The key idea behind the medium-term scheduler is
that sometimes it can be advantageous to remove processes from memory (and from
active contention for the CPU) and thus reduce the degree of multiprogramming. Later,
the process can be reintroduced into memory, and its execution can be continued where
it left off. This scheme is called swapping.
Dispatcher:
Dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
● Switching Context
o When CPU switches to another process, the system must save the state of
the old process and load the saved state for the new process.
o Context-switch time is overhead; the system does no useful work while
switching.
o Time dependent on hardware support.
● Switching to user mode
● Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible, given that it is invoked during every
process switch. The time it takes for the dispatcher to stop one process and start another
running is known as dispatch latency.
Q. Write a short note on Preemptive Scheduling & Non-Preemptive Scheduling.
Ans. Preemptive scheduling: In preemptive scheduling, the CPU can be taken away from a running process. In a priority-driven preemptive scheduler, the highest-priority ready process is always the one currently using the CPU.
Non-preemptive scheduling: Once a process enters the running state, it is not removed from the CPU until it finishes its service time or voluntarily enters the waiting state.
Scheduling decisions may take place when a process: (1) switches from running to waiting state, (2) switches from running to ready state, (3) switches from waiting to ready state, or (4) terminates. Scheduling only under circumstances 1 and 4 is non-preemptive; all other scheduling is preemptive.

Scheduling Performance Criteria:


● CPU Utilization: We want to keep the CPU as busy as possible. Conceptually,
CPU utilization can range from 0 to 100 percent. In a real system, it should
range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily
used system)
● Throughput: the number of processes that are completed per time unit, called
throughput.
● Turnaround Time: The amount of time to execute a particular
process is called turnaround time.
● Waiting Time: the amount of time that a process spends waiting in the ready queue.
● Response Time: time from the submission of a request until the first
response is produced. This measure, called response time.
⮚ Optimization Criteria:
⮚ Max CPU utilization
⮚ Max throughput
⮚ Min turnaround time
⮚ Min waiting time
⮚ Min response time



Scheduling Algorithms:

A. First-Come, First-Served Scheduling


B. Shortest-Job-First Scheduling
C. Priority Scheduling
D. Round-Robin Scheduling
E. Multilevel Queue Scheduling
F. Multilevel Feedback Queue Scheduling
G. Multiple Processor Scheduling
H. Real Time Scheduling

A. First-Come, First-Served Scheduling (FCFS):

With this scheme, the process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS policy is easily managed with a FIFO queue. When a
process enters the ready queue, its PCB is linked onto the tail of the queue. When the
CPU is free, it is allocated to the process at the head of the queue. The running process
is then removed from the queue.
Example:
Process p1,p2,p3,p4,p5 having arrival time of 0,2,3,5,8 microseconds and processing
time 3,3,1,4,2 microseconds, Draw Gantt Chart & Calculate Average Turn Around
Time, Average Waiting Time, CPU Utilization & Throughput using FCFS.
Processes Arrival Time Processing Time T.A.T. W.T.
T(P.C.)-T(P.S.) TAT- T(Proc.)
P1 0 3 3-0=3 3-3=0
P2 2 3 6-2=4 4-3=1
P3 3 1 7-3=4 4-1=3
P4 5 4 11-5=6 6-4=2
P5 8 2 13-8=5 5-2=3
GANTT CHART:



P1 P2 P3 P4 P5
0 3 6 7 11 13
Average T.A.T. =(3+4+4+6+5)/5 = 22/5 = 4.4 Microsecond
Average W.T. = (0+1+3+2+3)/5 =9/5 = 1.8 Microsecond
CPU Utilization = (13/13)*100 = 100%
Throughput = 5/13 ≈ 0.38 processes per microsecond
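
As a cross-check (not part of the original notes), the FCFS computation above can be reproduced in C; the arrays below simply restate this example's arrival and processing times:

#include <stdio.h>

/* Minimal FCFS sketch using the arrival/processing times from the example
   above. Processes are assumed to be listed in arrival order. */
int main(void) {
    int arrival[] = {0, 2, 3, 5, 8};   /* arrival times (microseconds) */
    int burst[]   = {3, 3, 1, 4, 2};   /* processing times */
    int n = 5, clock = 0;
    double tat_sum = 0, wt_sum = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i]) clock = arrival[i]; /* CPU idles until arrival */
        clock += burst[i];                          /* run to completion */
        int tat = clock - arrival[i];               /* TAT = completion - arrival */
        int wt  = tat - burst[i];                   /* WT = TAT - burst */
        tat_sum += tat; wt_sum += wt;
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
    }
    printf("Avg TAT=%.1f Avg WT=%.1f Throughput=%.3f\n",
           tat_sum / n, wt_sum / n, (double)n / clock);  /* 4.4, 1.8, 0.385 */
    return 0;
}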

LECTURE 23
B. Shortest-Job-First Scheduling (SJF):

● Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time
● Two schemes:
i. nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst
ii. preemptive – if a new process arrives with CPU burst length less than
remaining time of current executing process, preempt. This scheme is
known as the Shortest-Remaining-Time-First (SRTF)
● SJF is optimal – gives minimum average waiting time for a given set of processes
Example:
Process p1,p2,p3,p4 having burst time of 6,8,7,3 microseconds. Draw Gantt Chart &
Calculate Average Turn Around Time, Average Waiting Time, CPU Utilization &
Throughput using SJF.
Processes Burst Time T.A.T. W.T.
T(P.C.)-T(P.S.) TAT- T(Proc.)
P4 3 3-0=3 3-3=0
P1 6 9-0=9 9-6=3
P3 7 16-0=16 16-7=9
P2 8 24-0=24 24-8=16
GANTT CHART



P4 P1 P3 P2
0 3 9 16 24
Average T.A.T. = (3+9+16+24)/4 = 52/4 = 13 microsecond
Average W.T. = (0+3+9+16)/4 = 28/4 = 7 microsecond
CPU Utilization = (24/24)*100 = 100%
Throughput = 4/24 ≈ 0.17
Example: Non-Preemptive SJF (worked example shown in figure)
Example: Preemptive SJF (worked example shown in figure)



LECTURE 24

C. Priority Scheduling:
● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
● Problem: Starvation – low-priority processes may never execute
● Solution: Aging – as time progresses, increase the priority of the process

Example: Process p1,p2,p3,p4,p5 having burst time of 10,1,2,1,5 microseconds


and priorities are 3,1,4,5,2. Draw Gantt Chart & Calculate Average Turn Around Time,
Average Waiting Time, CPU Utilization & Throughput using Priority Scheduling.

Processes Priority Processing Time T.A.T. W.T.


T(P.C.)-T(P.S.) TAT- T(Proc.)
P2 1 1 1-0=1 1-1=0
P5 2 5 6-0=6 6-5=1
P1 3 10 16-0=16 16-10=6
P3 4 2 18-0=18 18-2=16
P4 5 1 19-0=19 19-1=18

GANTT CHART:

P2 P5 P1 P3 P4
0 1 6 16 18 19

Average T.A.T. =(1+6+16+18+19)/5 = 12 microsecond


Average W.T. = (0+1+6+16+18)/5 =41/5 = 8.2 microsecond
CPU Utilization = (19/19)*100 = 100%
Throughput = 5/19 ≈ 0.26



LECTURE 25

D. Round-Robin Scheduling:
● Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
● Used for time sharing & multiuser O.S.
● FCFS with preemptive scheduling.

Example:
Process p1,p2,p3 having processing time of 24,3,3 milliseconds.
Draw Gantt Chart & Calculate Average Turn Around Time, Average Waiting
Time, CPU Utilization & Throughput using Round Robin with time slice of
4milliseconds.
Processes Processing T.A.T. W.T.
Time
T(P.C.)-T(P.S.) TAT- T(Proc.)
P1 24 30-0=30 30-24=6
P2 3 7-0=7 7-3=4



P3 3 10-0=10 10-3=7

GANTT CHART

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30
Average T.A.T. = (30+7+10)/3 = 47/3 = 15.67 millisecond
Average W.T. = (6+4+7)/3 =17/3 = 5.67 millisecond

CPU Utilization = (30/30)*100 = 100%


Throughput = 3/30=0.1
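
A minimal round-robin sketch in C (not from the original notes), assuming, as in this example, that all three processes arrive at time 0 and the quantum is 4 ms; with these arrivals a simple cyclic scan over the processes reproduces the Gantt chart above:

#include <stdio.h>

/* Round-robin sketch for the example above (quantum 4 ms, all arrivals 0). */
int main(void) {
    int burst[]  = {24, 3, 3};
    int remain[] = {24, 3, 3};
    int finish[3] = {0};
    int n = 3, q = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {      /* cycle through the ready queue */
            if (remain[i] == 0) continue;
            int slice = remain[i] < q ? remain[i] : q;
            clock += slice;                /* run one quantum (or less) */
            remain[i] -= slice;
            if (remain[i] == 0) { finish[i] = clock; done++; }
        }
    }
    for (int i = 0; i < n; i++)            /* arrival time is 0 for all */
        printf("P%d: TAT=%d WT=%d\n", i + 1, finish[i], finish[i] - burst[i]);
    return 0;
}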

E. Multilevel Queue Scheduling

● Ready queue is partitioned into separate queues: foreground (interactive) and background (batch)
● Each queue has its own scheduling algorithm: foreground – RR, background – FCFS
● Scheduling must be done between the queues:
Fixed priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation.
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g. 80% to foreground in RR, 20% to background in FCFS

F. Multilevel Feedback Queue Scheduling


● A process can move between the various queues; aging can be implemented this way
● Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when



that process needs service
Example of Multilevel Feedback Queue
● Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
● Scheduling
A new job enters queue Q0 which is served FCFS. When it gains CPU,
job receives 8 milliseconds. If it does not finish in 8 milliseconds, job is
moved to queue Q1.

At Q1 job is again served FCFS and receives 16 additional milliseconds.


If it still does not complete, it is preempted and moved to queue Q2.

G. Multiple Processor Scheduling


● CPU scheduling more complex when multiple CPUs are available
● Homogeneous processors within a multiprocessor
● Load sharing
● Asymmetric multiprocessing – only one processor accesses the system data
structures

H. Real Time Scheduling


● Hard real-time systems – required to complete a critical task within a
guaranteed amount of time

● Soft real-time computing – requires that critical processes receive priority


over less fortunate ones.

Problem 1: Process p1,p2,p3 having burst time of 24,3,3 microseconds. Draw Gantt
Chart & Calculate Average Turn Around Time, Average Waiting Time, CPU Utilization
& Throughput using FCFS.
[Ans. Average TAT = 27 microsecond, Average WT = 17 microseconds, CPU



Utilization = 100%, Throughput = 0.1]
Problem 2: Consider the set of process A,B,C,D,E having arrival time of 0,2,3,3.5,4
and execution time 4,7,3,3,5 and the following scheduling algorithms:
a. FCFS
b. Round Robin (quantum=2)
c. Round Robin (quantum=1)
If there is a tie between processes, the tie is broken in favour of the oldest process.
i) Draw the Gantt chart and find the average waiting time & response time for the algorithms. Comment on your result: which one is better and why?
ii) If the scheduler takes 0.2 unit of CPU time in a context switch for a completed job & 0.1 unit of additional CPU time for incomplete jobs for saving their context, calculate the percentage of CPU time wasted in each case.
Problem 3: Processes A,B,C,D,E having arrival time 0,0,1,2,2 and execution time
10,2,3,1,4 and priority 3,1,3,5,2. Draw the Gantt Chart and find average waiting time
and response time of the process set.

Problem 4: Process p1,p2,p3 having burst time 7,3,9 and priority 1,2,3 and arrival time
0,4,7.
Calculate turn around time and average waiting time using
i) SJF
ii) priority. (both preemptive)

Problem 5: Process p1,p2,p3,p4 having arrival time 0,1,2,3 and burst time 8,4,9,5.
Calculate turn around time and waiting time using SJF, FCFS.

LECTURE 26
Deadlock: A set of blocked processes each holding a resource and waiting to acquire a resource
held by another process in the set.



Deadlock Problem: Bridge Crossing Example

a) Traffic only in one direction.

b) Each section of a bridge can be viewed as a resource.


c) If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback).
d) Several cars may have to be backed up if a deadlock occurs.
e) Starvation is possible.

System Model:
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. Resources include memory space, CPU cycles, files, and I/O devices (such as printers and DVD drives).



If a system has two CPUs, then the resource type CPU has two instances. Similarly, the
resource type printer may have five instances.
● Resource types R1, R2, . . ., Rm ( CPU cycles, memory space, I/O devices)
● Each resource type Ri has Wi instances.
● Each process utilizes a resource as follows:
i. Request: If the request cannot be granted immediately (for example, if
the resource is being used by another process), then the requesting
process must wait until it can acquire the resource.
ii. Use: The process can operate on the resource (for example, if the
resource is a printer, the process can print on the printer).
iii. Release: The process releases the resource.

Deadlock Characterization: Deadlock can arise if four conditions hold


simultaneously.
i. Mutual exclusion: only one process at a time can use a resource.
ii. Hold and wait: a process holding at least one resource is waiting to acquire
additional resources held by other processes.
iii. No preemption: a resource can be released only voluntarily by the process
holding it, after that process has completed its task.
iv. Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that
P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is
held by P2, …, Pn–1 is waiting for a resource that is held by Pn,
and Pn is waiting for a resource that is held by P0.



Resource-Allocation Graph:
Deadlocks can be described more precisely in terms of a directed graph called a
system resource-allocation graph. A set of vertices V and a set of edges E.
● V is partitioned into two types:
i. P = {P1, P2, …, Pn}, the
set consisting of all the
processes in the system.
ii. R = {R1, R2, …, Rm}, the
set consisting of all
resource types in the
system.
● request edge – directed edge Pi → Rj
● assignment edge – directed edge Rj → Pi

(Figures: Example of a Resource-Allocation Graph; Resource-Allocation Graph with a Deadlock)

LECTURE 27

Deadlock Prevention: Restrain the ways request can be made


i. Mutual Exclusion – not required for sharable resources; must hold for
nonsharable resources.
ii. Hold and Wait – must guarantee that whenever a process requests a resource, it
does not hold any other resources.
● Require process to request and be allocated all its resources before it
begins execution, or allow process to request resources only when the
process has none.
● Low resource utilization; starvation possible.
iii. No Preemption –
● If a process that is holding some resources requests another resource that



cannot be immediately allocated to it, then all resources currently being
held are released.

● Preempted resources are added to the list of resources for which the
process is waiting.

● Process will be restarted only when it can regain its old resources, as well
as the new ones that it is requesting.

iv. Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration.
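
To make condition (iv) concrete, here is a small C sketch (illustrative, not from the notes) using POSIX mutexes: every thread acquires the two locks in the same fixed, increasing order, so a cycle of waiting threads can never form. The names lock_a, lock_b and transfer() are hypothetical:

#include <pthread.h>

/* Circular-wait prevention: impose a total order on the locks and always
   acquire them in that order. */
pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* resource #1 */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */

void transfer(void) {
    pthread_mutex_lock(&lock_a);    /* always first: lower order number */
    pthread_mutex_lock(&lock_b);    /* always second: higher order number */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);  /* release in reverse order */
    pthread_mutex_unlock(&lock_a);
}

int main(void) { transfer(); return 0; }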

Deadlock Avoidance: Requires that the system has some additional a priori information
available.
● Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
● The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.
● Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
A. Safe State:
● When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state.
● The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
● That is:
i. If Pi resource needs are not
immediately available, then Pi can
wait until all Pj have finished.
ii. When Pj is finished, Pi can obtain
needed resources, execute, return



allocated resources, and terminate.
iii. When Pi terminates, Pi +1 can obtain its needed resources, and so on.
● If a system is in safe state ⇒ no deadlocks.
● If a system is in unsafe state ⇒ possibility of deadlock.
● Avoidance ⇒ ensure that a system will never enter an unsafe state.

B. Avoidance Algorithms
i. Single instance of a resource type: Use a resource-allocation graph
ii. Multiple instances of a resource type: Use the banker’s algorithm (a sketch appears under Deadlock Detection below)

Resource-Allocation Graph Scheme


● Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
● Claim edge converts to request edge when a process requests a resource.
● Request edge converted to an assignment edge when the resource is allocated to
the process.

● Suppose that process Pi requests a resource Rj


● The request can be granted only if converting the request edge to an assignment edge does
not result in the formation of a cycle in the resource allocation graph

Deadlock Detection:
If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, a deadlock may occur. In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock.

A. Single Instance of Each Resource Type

B. Several Instances of a Resource Type


i. Available: A vector of length m indicates the number of available resources of each
type.
ii. Allocation: An n x m matrix defines the number of resources of each
type currently allocated to each process.
iii. Request: An n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.
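
These matrices can be checked with the safety-style scan sketched below in C (essentially the banker's algorithm safety test, also usable for detection): repeatedly find a process whose outstanding need/request can be satisfied from the work vector, assume it runs to completion, and release its allocation. The matrix values in main() are illustrative only:

#include <stdio.h>
#include <stdbool.h>

#define N 3  /* processes */
#define M 3  /* resource types */

bool is_safe(int avail[M], int alloc[N][M], int need[N][M]) {
    int work[M];
    bool finished[N] = {false};
    for (int j = 0; j < M; j++) work[j] = avail[j];

    for (int pass = 0; pass < N; pass++) {      /* at most N rounds */
        for (int i = 0; i < N; i++) {
            if (finished[i]) continue;
            bool can_run = true;                /* need_i <= work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                      /* Pi runs, then releases */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finished[i] = true;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!finished[i]) return false;         /* Pi can never finish */
    return true;
}

int main(void) {
    int avail[M]    = {2, 2, 2};                        /* illustrative data */
    int alloc[N][M] = {{0,1,0}, {2,0,0}, {3,0,2}};
    int need[N][M]  = {{1,2,2}, {0,1,1}, {2,0,0}};
    printf(is_safe(avail, alloc, need) ? "safe\n" : "unsafe/deadlocked\n");
    return 0;
}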

LECTURE 28

Recovery from Deadlock:

A. Process Termination:

● Abort all deadlocked processes.


● Abort one process at a time until the deadlock cycle is eliminated.
Factors that may determine which process is chosen to abort include:
1. What the priority of the process is?
2. How long the process has computed and how much longer the process will compute
before completing its designated task?
3. How many and what type of resources the process has used? (for
example, whether the resources are simple to preempt)
4. How many more resources the process needs in order to complete?
5. How many processes will need to be terminated?
6. Whether the process is interactive or batch?

B. Resource Preemption:

1. Selecting a victim: Which resources and which processes are to be preempted?


2. Rollback: If we preempt a resource from a process, the process cannot continue with its normal execution; it is missing some needed resource. We must roll back the process to some safe state and restart it from that state.
3. Starvation: How do we guarantee that resources will not always be preempted from the same process?



IMPORTANT QUESTIONS

1 Explain threads
2 What do you understand by Process? Explain various states of process with suitable diagram. Explain process
control block.
3 What is a deadlock? Discuss the necessary conditions for deadlock with examples
4 Describe Banker’s algorithm for safe allocation.
5 What are the various scheduling criteria for CPU scheduling
6 What is the use of inter process communication and context switching
7 Discuss the usage of wait-for graph method
8
Consider the following snapshot of a system:

            Allocated     Maximum      Available
Process     R1 R2 R3      R1 R2 R3     R1 R2 R3
P1          2  2  3       3  6  8      7  7  10
P2          2  0  3       4  3  3
P3          1  2  4       3  4  4

Answer the following questions using the banker’s algorithm:

1) What is the content of the matrix need?


2) Is the system in a safe state?

9 Is it possible to have a deadlock involving only a single process? Explain


10 Describe the typical elements of the process control block.

11 What are the various scheduling criteria for CPU scheduling?


12 What is a safe state and an unsafe state?
13 Define Process. Explain various steps involved in change of a process state with neat transition diagram.



14 Consider the following processes:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

What is the average waiting and turnaround time for these processes with:
i. FCFS Scheduling
ii. Preemptive SJF Scheduling



15 Consider the following processes:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

Draw the Gantt chart and find the average waiting time and average turnaround time:
i. FCFS Scheduling
ii. SRTF Scheduling

Consider the following processes:

Process   Arrival Time   Burst Time   Priority
P1        0              6            3
P2        1              4            1
P3        2              5            2
P4        3              8            4

Draw the Gantt chart and find the average waiting time and average turnaround time:
(i) SRTF Scheduling
(ii) Round Robin (time quantum: 3)

16 What is the need for Process Control Block (PCB)?


17 Draw process state transition diagram
18
Define the multilevel feedback queues scheduling.
19 Discuss the performance criteria for CPU Scheduling.



UNIT-4 (Memory Management)

Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed
partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged
segmentation, Virtual memory concepts, Demand paging, Performance of demand paging, Page
replacement algorithms, Thrashing, Cache memory organization, Locality of reference

LECTURE 29

What is MEMORY MANAGEMENT ?


In a uni-programming system, main memory is divided into two parts: one part for the
operating system (resident monitor, kernel) and one part for the user program currently
being executed.
In a multiprogramming system, the “user” part of memory must be further subdivided to
accommodate multiple processes. The task of subdivision is carried out dynamically by the
operating system and is known as memory management.
Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can happen at three different
stages.
1. Compile time: Compile time is when the program or source code is compiled. If the memory location is known a priori during compilation, absolute code can be generated.
2. Load time: This is when all related program files are linked and loaded into main memory. Relocatable code must be generated if the memory location is not known at compile time.
3. Execution time: This is when the program executes in main memory on the processor. Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).



Fig:4.1 (Multi-step processing of a user program)
Logical- Versus Physical-Address Space

An address generated by the CPU is commonly referred to as a logical address or a virtual


address whereas an address seen by the main memory unit is commonly referred to as a
physical address.
● The set of all logical addresses generated by a program is a logical-address space
whereas the set of all physical addresses corresponding to these logical addresses is
a physical address space.
● Logical and physical addresses are the same in compile-time and load-time address
binding schemes; logical (virtual) and physical addresses differ in execution-time
address binding scheme.
● The Memory Management Unit is a hardware device that maps virtual to physical
address. In MMU scheme, the value in the relocation register is added to every
address generated by a user process at the time it is sent to memory as follows:



Fig:4.2(Dynamic relocation using a relocation register)
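
A minimal C sketch of the check-then-relocate step performed by the MMU in Fig 4.2; the register contents (base 14000, limit 3000) and the logical address 346 are assumed example values, not from the notes:

#include <stdio.h>
#include <stdlib.h>

/* physical = relocation + logical, but only if logical < limit. */
unsigned translate(unsigned logical, unsigned reloc, unsigned limit) {
    if (logical >= limit) {                 /* outside the partition: trap */
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return reloc + logical;
}

int main(void) {
    unsigned reloc = 14000, limit = 3000;          /* assumed registers */
    printf("%u\n", translate(346, reloc, limit));  /* prints 14346 */
    return 0;
}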

Dynamic Loading
● It loads the program and data dynamically into physical memory to obtain better
memory- space utilization.
● With dynamic loading, a routine is not loaded until it is called.
● The advantage of dynamic loading is that an unused routine is never loaded.
● This method is useful when large amounts of code are needed to handle infrequently
occurring cases, such as error routines.
● Dynamic loading does not require special support from the operating system.

Dynamic Linking
● Linking postponed until execution time.
● Small piece of code (stub) used to locate the appropriate memory-resident library routine.
● Stub replaces itself with the address of the routine and executes the routine.
● The operating system is needed to check whether the routine is in the process's memory address space.
● Dynamic linking is particularly useful for libraries.

LECTURE 30
Overlays
● Keep in memory only those instructions and data that are needed at any given time.
● Needed when process is larger than amount of memory allocated to it.
● Implemented by user, no special support needed from operating system, programming
design of overlay structure is complex.



Swapping
● A process can be swapped temporarily out of memory to a backing store (large disc),
and then brought back into memory for continued execution.
● Roll out, roll in: A variant of this swapping policy is used for priority-based
scheduling algorithms. If a higher-priority process arrives and wants service, the
memory manager can swap out the lower-priority process so that it can load and
execute the higher-priority process. When the higher-priority process finishes, the
lower-priority process can be swapped back in and continued. This variant of
swapping is called roll out, roll in.
● Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. Modified versions of swapping are found on many systems (UNIX, Linux, and Windows).



Fig:4.3 (Swapping of two processes using a disk as a backing store)

LECTURE:31

MEMORY ALLOCATION

The main memory must accommodate both the operating system and the various user processes.
We need to allocate different parts of the main memory in the most efficient way possible. The
main memory is usually divided into two partitions: one for the resident operating system, and
one for the user processes. We may place the operating system in either low memory or high
memory. The major factor affecting this decision is the location of the interrupt vector. Since the
interrupt vector is often in low memory, programmers usually place the operating system in low
memory as well.

There are following two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Noncontiguous memory allocation

1. Contiguous Memory Allocation- Here, all the processes are stored in contiguous memory
locations. To load multiple processes into memory, the Operating System must divide
memory into multiple partitions for those processes.

Hardware Support- The relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address of a partition, and the limit register contains the range of that partition.

Each logical address must be less than the limit register.

Fig:4.4 (Hardware support for relocation and limit registers)

According to size of partitions, the multiple partition schemes are divided into two types:



i. Multiple fixed partition / multiprogramming with fixed tasks (MFT)
ii. Multiple variable partition / multiprogramming with variable tasks (MVT)

i. Multiple fixed partitions- Main memory is divided into a number of static


partitions at system generation time. In this case, any process whose size is less than or
equal to the partition size can be loaded into any available partition. If all partitions are
full and no process is in the Ready or Running state, the operating system can swap a
process out of any of the partitions and load in another process, so that there is some
work for the processor.
Advantages:
● Simple to implement and little operating system overhead.
Disadvantage:
● Inefficient use of memory due to internal fragmentation.
● Maximum number of active processes is fixed.

ii. Multiple Variable Partitions- With this partitioning, the partitions are of
variable length and number. When a process is brought into main memory, it is allocated
exactly as much memory, as it requires and no more.
Advantages:
● No internal fragmentation and more efficient use of main memory.
Disadvantages:
● Inefficient use of processor due to the need for compaction to counter external
fragmentation.
iii. Partition Selection policy- When the multiple memory holes (partitions) are
large enough to contain a process, the operating system must use an algorithm to select
in which hole the process will be loaded. The partition selection algorithm are as
follows:
● First-fit: The OS scans the sections of free memory and allocates the process to the first hole found that is big enough for the process.
● Next-fit: The search starts at the last hole allocated, and the process is allocated to the next hole found that is big enough.
● Best-fit: The OS searches the entire list of holes to find the smallest hole that is big enough for the process.
● Worst-fit: The OS searches the entire list of holes to find the largest hole that is big enough for the process.



Fragmentation- The wasting of memory space is called fragmentation. There are two types of
fragmentation as follows:

1. External Fragmentation- The total memory space exists to satisfy a request, but it is not
contiguous. This wasted space not allocated to any partition is called external fragmentation.
The external fragmentation can be reduce by compaction. The goal is to shuffle the memory
contents to place all free memory together in one large block. Compaction is possible only if
relocation is dynamic, and is done at execution time.

2. Internal Fragmentation- The allocated memory may be slightly larger than requested
memory. The wasted space within a partition is called internal fragmentation. One method to
reduce internal fragmentation is to use partitions of different size.

2. Noncontiguous Memory Allocation- In noncontiguous memory allocation, processes may be stored in noncontiguous memory locations. Different techniques are used to load processes into memory, as follows:

LECTURE 32

PAGING
Main memory is divided into a number of equal-size blocks, are called frames. Each process is
divided into a number of equal-size block of the same length as frames, are called Pages. A
process is loaded by loading all of its pages into available frames (may not be contiguous).

Fig:4.5 (Paging hardware)



Process of Translation from logical to physical addresses
● Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d). The page number is used as an index into a page table.
● The page table contains the base address of each page in physical memory. This base
address is combined with the page offset to define the physical memory address that is sent
to the memory unit.
● If the size of the logical-address space is 2^m and a page size is 2^n addressing units (bytes or words), then the high-order (m – n) bits of a logical address designate the page number and the n low-order bits designate the page offset. Thus, the logical address is as follows:

| page number p (m – n bits) | page offset d (n bits) |

Where p is an index into the page table and d is the displacement within the page.
Example: Consider a page size of 4 bytes and a physical memory of 32 bytes (8 pages), we show
how the user's view of memory can be mapped into physical memory. Logical address 0 is page
0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0
maps to physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical
address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table, page
1 is mapped to frame6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0).
Logical address 13 maps to physical address 9(= (2 x 4)+1).
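
The worked example above can be reproduced in C. The page-table contents below follow the example (page 0 → frame 5, page 1 → frame 6, page 3 → frame 2); the entry for page 2 is an assumed filler, since the example never references that page:

#include <stdio.h>

int main(void) {
    unsigned page_table[] = {5, 6, 1, 2};  /* page -> frame (page 2 assumed) */
    unsigned page_size = 4;                /* n = 2 offset bits */
    unsigned logical[] = {0, 3, 4, 13};

    for (int i = 0; i < 4; i++) {
        unsigned p = logical[i] / page_size;   /* page number (high bits) */
        unsigned d = logical[i] % page_size;   /* offset (low n bits) */
        unsigned physical = page_table[p] * page_size + d;
        printf("logical %2u -> physical %2u\n", logical[i], physical);
    }
    return 0;   /* prints 0->20, 3->23, 4->24, 13->9 */
}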



Fig:4.6
Hardware Support for Paging:
Each operating system has its own methods for storing page tables. Most operating systems
allocate a page table for each process. A pointer to the page table is stored with the other
register values (like the instruction counter) in the process control block. When the dispatcher is
told to start a process, it must reload the user registers and define the correct hardware page
table values from the stored user page table.

Implementation of Page Table


● Generally, the page table is kept in main memory. The Page-Table Base Register (PTBR) points to the page table. In addition, the Page-Table Length Register (PTLR) indicates the size of the page table.
● In this scheme every data/instruction access requires two memory accesses. One for the
page table and one for the data/instruction.
● The two-memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs).
LECTURE 33

Paging Hardware With TLB


The TLB is an associative and high-speed memory. Each entry in the TLB consists of two
parts: a key (or tag) and a value. The TLB is used with page tables in the following way.
● The TLB contains only a few of the page-table entries. When the CPU Generates a
logical address, its page number is presented to the TLB.
● If the page number is found (known as a TLB Hit), its frame number is immediately
available and is used to access memory. It takes only one memory access.
● If the page number is not in the TLB (known as a TLB miss), a memory reference to
the page table must be made. When the frame number is obtained, we can use it to
access memory. It takes two memory accesses.
● In addition, it stores the page number and frame number to the TLB, so that they will
be found quickly on the next reference.
● If the TLB is already full of entries, the operating system must select one for
replacement by using replacement algorithm.



Fig:4.6 (Paging hardware with TLB)

The percentage of times that a particular page number is found in the TLB is called the
hit ratio. The effective access time (EAT) is obtained as follows:

EAT= HR x (TLBAT + MAT) + MR x (TLBAT + 2 x MAT)


Where HR: Hit Ratio, TLBAT: TLB access time, MAT: Memory access time, MR: Miss
Ratio.
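
As a quick numeric sketch of this formula in C (the 20 ns TLB access time, 100 ns memory access time and 90% hit ratio are assumed values, not from the notes):

#include <stdio.h>

int main(void) {
    double hr = 0.90, tlbat = 20.0, mat = 100.0;   /* assumed timings, ns */
    double eat = hr * (tlbat + mat) + (1.0 - hr) * (tlbat + 2.0 * mat);
    printf("EAT = %.1f ns\n", eat);   /* 0.9*120 + 0.1*220 = 130.0 ns */
    return 0;
}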

LECTURE:34

Memory protection in Paged Environment:


● Memory protection in a paged environment is accomplished by protection bits that
are associated with each frame. These bits are kept in the page table.
● One bit can define a page to be read-write or read-only. This protection bit can be
checked to verify that no writes are being made to a read-only page. An attempt to
write to a read-only page causes a hardware trap to the operating system (or memory-
protection violation).
● One more bit is attached to each entry in the page table: a valid-invalid bit. When this
bit is set to "valid," this value indicates that the associated page is in the process'
logical address. space, and is a legal (or valid) page. If the bit is set to "invalid," this
value indicates that the page is not in the process' logical-address space.
● Illegal addresses are trapped by using the valid-invalid bit. The operating system
sets this bit for each page to allow or disallow accesses to that page.



Fig:4.7 (Valid (v) or invalid (i) bit in a page table)

Structure of the Page Table


There are different structures of page table described as follows:
1. Hierarchical Page Table- When the number of pages is very high, the page table takes a large amount of memory space. In such cases, we use a multilevel paging scheme to reduce the size of the page table. A simple technique is a two-level page table. Since the page table itself is paged, the page number is further divided into two parts: an outer page number (p1) and an inner page number (p2). Thus, a logical address is as follows:

| p1 | p2 | d |

Where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table.

Two-Level Page-Table Scheme:



Fig :4.8 (Address translation scheme for a two-level paging architecture)

2. Hashed Page Tables- This scheme is applicable for address space larger than 32bits.
In this scheme, the virtual page number is hashed into a page table. This page table
contains a chain of elements hashing to the same location. Virtual page numbers are
compared in this chain searching for a match. If a match is found, the corresponding
physical frame is extracted.



Fig:4.9
3. Inverted Page Table-
● One entry for each real page of memory.
● Entry consists of the virtual address of the page stored in that real memory Location,
with information about the process that owns that page.
● Decreases memory needed to store each page table, but increases time needed to search
the table when a page reference occurs.

Fig:4.9
Shared Pages

Shared code
● One copy of read-only (reentrant) code shared among processes (i.e., text editors,
compilers, window systems).
● Shared code must appear in same location in the logical address space of all processes.
Private code and data
● Each process keeps a separate copy of the code and data.
● The pages for the private code and data can appear anywhere in the logical address
space.



PRACTICE PROBLEMS BASED ON PAGING AND PAGE TABLE-

Problem-01:
Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte addressable.
We have-
● Number of locations possible with 22 bits = 2^22 locations
● It is given that the size of one location = 2 bytes

Thus, size of memory
= 2^22 x 2 bytes
= 2^23 bytes
= 8 MB

Problem-02:

Calculate the number of bits required in the address for memory having size of 16 GB. Assume the
memory is 4-byte addressable.

Let ‘n’ bits be required. Then, size of memory = 2^n x 4 bytes. Since the given memory has a size of 16 GB, we have-
2^n x 4 bytes = 16 GB
2^n x 4 = 16 G
2^n x 2^2 = 2^34
2^n = 2^32
∴ n = 32 bits

Problem-03:

Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the
approximate size of the page table?
Given-
● Size of main memory = 64 MB
● Number of bits in virtual address space = 32 bits
● Page size = 4 KB
We will consider that the memory is byte addressable.



Number of Bits in Physical Address-

Size of main memory
= 64 MB
= 2^26 B
Thus, number of bits in physical address = 26 bits
Number of Frames in Main Memory-
Number of frames in main memory
= Size of main memory / Frame size
= 64 MB / 4 KB
= 2^26 B / 2^12 B
= 2^14
Thus, number of bits in frame number = 14 bits

Number of Bits in Page Offset-

We have,
Page size
= 4 KB
= 2^12 B
Thus, number of bits in page offset = 12 bits
So, the physical address is 26 bits (14-bit frame number + 12-bit page offset).

Process Size-

Number of bits in virtual address space = 32 bits


Thus,
Process size
= 2^32 B
= 4 GB

Number of Entries in Page Table-

Number of pages the process is divided into
= Process size / Page size
= 4 GB / 4 KB
= 2^20 pages
Thus, number of entries in page table = 2^20 entries



Page Table Size-
Page table size
= Number of entries in page table x Page table entry size
= Number of entries in page table x Number of bits in frame number
= 2^20 x 14 bits
≈ 2^20 x 16 bits (approximating 14 bits ≈ 16 bits)
= 2^20 x 2 bytes
= 2 MB

LECTURE 35
SEGMENTATION
Segmentation is a memory-management scheme that supports user view of memory. A program is
a collection of segments. A segment is a logical unit such as: main program, procedure, function,
method, object, local variables, global variables, common block, stack, symbol table, arrays etc.
A logical-address space is a collection of segments. Each segment has a name and a length. The
user specifies each address by two quantities: a segment name/number and an offset.
Hence, a logical address consists of a two tuple: <segment-number, offset>. The segment table maps the two-dimensional user-defined addresses into one-dimensional physical addresses; each entry in the table has a base – the starting physical address where the segment resides in memory – and a limit – the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates number of segments used by a program.

Fig:4.10 (Diagram of Segmentation Hardware)

The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond the end of the segment). If the offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. Consider five segments numbered from 0 through 4. The segments are stored in physical memory as shown in the figure. The segment table has a separate entry for each segment, giving the start address in physical memory (or base) and the length of that segment (or limit). For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
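
The lookup just described is easy to express in C. The base/limit table below follows the five-segment example (segment 2: base 4300, limit 400; segment 3: base 3200, limit 1100); the remaining rows are assumed from the figure:

#include <stdio.h>
#include <stdlib.h>

struct seg { unsigned base, limit; };

unsigned translate(struct seg *table, unsigned s, unsigned d) {
    if (d >= table[s].limit) {                /* offset beyond segment end */
        fprintf(stderr, "trap: offset %u out of range\n", d);
        exit(EXIT_FAILURE);
    }
    return table[s].base + d;                 /* physical = base + offset */
}

int main(void) {
    struct seg table[] = {
        {1400, 1000},   /* segment 0 */
        {6300,  400},   /* segment 1 */
        {4300,  400},   /* segment 2 */
        {3200, 1100},   /* segment 3 */
        {4700, 1000},   /* segment 4 */
    };
    printf("%u\n", translate(table, 2, 53));    /* 4300 + 53  = 4353 */
    printf("%u\n", translate(table, 3, 852));   /* 3200 + 852 = 4052 */
    return 0;
}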

Fig:4.11 (Segmentation)

LECTURE 36
VIRTUAL MEMORY

Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. Only part of the program needs to be in memory for execution. It
means that Logical address space can be much larger than physical address space.
Virtual memory allows processes to easily share files and address spaces, and it provides
an efficient mechanism for process creation.
Virtual memory is the separation of user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided for programmers
when only a smaller physical memory is available. Virtual memory makes the task of
programming much easier, because the programmer no longer needs to worry about the
amount of physical memory available.



Fig:4.12 (virtual memory that is larger than physical memory)

Virtual memory can be implemented via:

● Demand paging
● Demand segmentation

PRACTICE PROBLEM BASED ON SEGMENTATION-


Given below is an example of segmentation. There are five segments numbered from 0 to 4. These segments are stored in physical memory as shown. There is a separate entry for each segment in the segment table, which contains the beginning address of the segment in physical memory (denoted as the base) and the length of the segment (denoted as the limit).

SOLUTION:
Segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in a trap to the OS, as the length of this segment is 1000 bytes.

Example of Segmentation

To see how segmentation functions, assume there are five segments, numbered 0 through 4. All of the process segments are stored in the physical memory space before the process is executed. A segment table is also available: it contains the beginning address of each segment (denoted by base) and the length of each segment (denoted by limit).
Segment 2 starts at position 4300 and is 400 bytes long. As a result, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353. A reference to byte 852 of segment 3 is mapped to 3200 (the base of segment 3) + 852 = 4052.
Segment 0 has a length of 1000 bytes, so referencing byte 1222 of that segment would trigger an OS trap.

Fig:4.1(segmentation)



LECTURE -37

DEMAND PAGING

A demand-paging system is similar to a paging system with swapping. Generally, processes reside on secondary memory (which is usually a disk). When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, the pager swaps in only the required pages. This can be done by a lazy swapper.

A lazy swapper never swaps a page into memory unless that page will be needed. A swapper
manipulates entire processes, whereas a pager is concerned with the individual pages of a
process.

Page transfer Method:

When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only
those necessary pages into memory. Thus, it avoids reading into memory pages that will not be
used anyway, decreasing the swap time and the amount of physical memory needed.

Fig:4.13 (Transfer of a paged memory to contiguous disk space)

Page Table-
● The valid-invalid bit scheme of Page table can be used for indicating which pages are
currently in memory.
● When this bit is set to "valid", this value indicates that the associated page is both legal and
in memory. If the bit is set to "invalid", this value indicates that the page either is not valid or
is valid but is currently on the disk.



● The page-table entry for a page that is brought into memory is set as usual, but the page table
entry for a page that is not currently in memory is simply marked invalid, or contains the
address of the page on disk.

Fig :4.14 (Page table when some pages are not in main memory)

When a page references an invalid page, then it is called Page Fault. It means that page is
not in main memory. The procedure for handling page fault is as follows:
1. We check an internal table for this process, to determine whether the reference
was a valid or invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but we have not yet brought that page into memory, we page it in.
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The
process can now access the page as though it had always been in memory.



Fig:4.15 (Steps in handling a page fault)

Note: The pages are copied into memory, only when they are required. This mechanism is
called Pure Demand Paging.

Performance of Demand Paging

Let p be the probability of a page fault (0< p < 1). Then the effective access time is
Effective access time = (1 - p) x memory access time + p x page fault time
In any case, we are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.

3. Restart the process.

LECTURE 38

PAGE REPLACEMENT
The page replacement is a mechanism that loads a page from disc to memory when a page
of memory needs to be allocated. Page replacement can be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:



a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.

Fig:4.16 (Page replacement)

Page Replacement Algorithms:

The page replacement algorithms decide which memory pages to page out (swap out,
write to disk) when a page of memory needs to be allocated. We evaluate an algorithm by
running it on a particular string of memory references and computing the number of page
faults. The string of memory references is called a reference string. The different page
replacement algorithms are described as follows:

First-In-First-Out (FIFO) Algorithm:


This is the simplest page-replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue; the oldest page is at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example-1 Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number
of page faults.



Fig:4.17 (FIFO page-replacement algorithm)

● Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
● When 3 comes, it is already in memory so —> 0 page faults.
● Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e. 1 —> 1 page fault.
● 6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e. 3 —> 1 page fault.
● Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault.
(Total: 6 page faults.)
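
The fault count can be verified with a short C sketch of FIFO replacement over this reference string with 3 frames:

#include <stdio.h>

int main(void) {
    int ref[] = {1, 3, 0, 3, 5, 6, 3};
    int n = 7, nframes = 3;
    int frames[3] = {-1, -1, -1};     /* -1 = empty slot */
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                           /* page fault */
            frames[oldest] = ref[i];          /* evict in FIFO order */
            oldest = (oldest + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);     /* prints 6 */
    return 0;
}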

LECTURE-39

1. Optimal Page Replacement algorithm:

In this algorithm, pages are replaced which would not be used for the longest duration
of time in the future.
Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, with 4 page
frame. Find number of page fault.



Fig:4.18

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there so —> 0 page faults.
When 3 comes it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 page fault.
0 is already there so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already available in memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark so that other replacement algorithms can be analyzed against it.

2. LRU Page Replacement Algorithm

In this algorithm, page will be replaced which is least recently used.

Example-3 Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4


page frames. Find number of page faults.



Fig:4.19
Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there so —> 0 page faults.
When 3 comes it takes the place of 7 because 7 is the least recently used —> 1 page fault.
0 is already in memory so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are already available in memory.
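
A corresponding C sketch for LRU over this reference string with 4 frames; each frame carries the time of its last use, and the frame with the smallest timestamp is the victim:

#include <stdio.h>

int main(void) {
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = 13, nframes = 4;
    int frames[4] = {-1, -1, -1, -1};
    int last_used[4] = {0};
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[t]) { hit = j; break; }
        if (hit >= 0) {
            last_used[hit] = t;               /* refresh recency on a hit */
        } else {
            int victim = 0;                   /* prefer empty slot, else LRU */
            for (int j = 1; j < nframes; j++)
                if (frames[j] == -1 ||
                    (frames[victim] != -1 && last_used[j] < last_used[victim]))
                    victim = j;
            frames[victim] = ref[t];
            last_used[victim] = t;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);     /* prints 6 */
    return 0;
}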

LECTURE-40

LRU Approximation Page Replacement algorithm


In this algorithm, Reference bits are associated with each entry in the page table. Initially,
all bits are cleared (to 0) by the operating system. As a user process executes, the bit
associated with each page referenced is set (to 1) by the hardware. After some time, we
can determine which pages have been used and which have not been used by examining
the reference bits. This algorithm can be classified into different categories as follows:
i. Additional-Reference-Bits Algorithm-

We can keep an 8-bit byte for each page in a table in memory. At regular intervals, a timer interrupt transfers control to the operating system. The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting the other bits right by 1 bit position and discarding the low-order bit. These 8-bit shift registers contain the history of page use for the last eight time periods.
If we interpret these 8 bits as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced.

ii. Second-Chance Algorithm-

The basic algorithm of second-chance replacement is a FIFO replacement algorithm.


When a page has been selected, we inspect its reference bit. If the value is 0, we proceed to
replace this page. If the reference bit is set to 1, we give that page a second chance and
move on to select the next FIFO page. When a page gets a second chance, its reference bit
is cleared and its arrival time is reset to the current time. Thus, a page that is given a
second chance will not be replaced until all other pages are replaced.



3. Counting-Based Page Replacement

We could keep a counter of the number of references that have been made to each page, and develop the following two schemes:
i. LFU page-replacement algorithm: The least frequently used (LFU) page-replacement algorithm requires that the page with the smallest count be replaced. The reason for this selection is that an actively used page should have a large reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement algorithm is based on the argument that the page with the largest count should be replaced.

LECTURE-41
ALLOCATION OF FRAMES

When a page fault occurs, a free frame must be available to store the new page. While a page swap is taking place, a replacement can be selected, which is written to the disk as the user process continues to execute. The operating system allocates all its buffer and table space from the free-frame list.
Two major allocation Algorithm/schemes.
1. Equal allocation
2. Proportional allocation

1. Equal allocation: The easiest way to split m frames among n processes is to give
everyone an equal share, m/n frames. This scheme is called equal allocation.

2. Proportional allocation: Here, available memory is allocated to each process according to its size. Let the size of the virtual memory for process pi be si, and define

S = Σ si

Then, if the total number of available frames is m, we allocate ai frames to process pi, where ai is approximately

ai = (si / S) x m
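
A small C sketch of this rule; the process sizes (10 and 127 pages) and m = 62 free frames are assumed illustrative values:

#include <stdio.h>

int main(void) {
    int size[] = {10, 127};      /* s_i: virtual-memory size of each process */
    int nproc = 2, m = 62;       /* m: total available frames (assumed) */
    int S = 0;
    for (int i = 0; i < nproc; i++) S += size[i];

    for (int i = 0; i < nproc; i++) {
        int ai = size[i] * m / S;             /* a_i = (s_i / S) * m */
        printf("process %d gets about %d frames\n", i, ai);  /* 4 and 57 */
    }
    return 0;
}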

Global Versus Local Allocation

We can classify page-replacement algorithms into two broad categories: global


replacement and local replacement.
Global replacement allows a process to select a replacement frame from the set of all
frames, even if that frame is currently allocated to some other process; one process can



take a frame from another. Local replacement requires that each process select from only
its own set of allocated frames.

THRASHING

The system spends most of its time shuttling pages between main memory and secondary memory due to frequent page faults; this behavior is known as thrashing.
A process is thrashing if it is spending more time paging than executing. This leads to low CPU utilization, and the operating system, observing the low utilization, may mistakenly increase the degree of multiprogramming, which makes the problem worse. Because of thrashing, CPU utilization becomes reduced or negligible.

Fig:4.20 (Thrashing)

LECTURE 42
Cache Memory
Cache Memory is a special very high-speed memory. The cache is a smaller and faster
memory that stores copies of the data from frequently used main memory locations. There are
various different independent caches in a CPU, which store instructions and data. The most
important use of cache memory is that it is used to reduce the average time to access data from
the main memory.
Characteristics of Cache Memory
Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU. Cache Memory holds frequently requested data and instructions so that they are



immediately available to the CPU when needed. Cache memory is costlier than main memory
or disk memory but more economical than CPU registers. Cache Memory is used to speed up
and synchronize with a high-speed CPU.

Fig:4.21 (Levels of Memory)

Level 1 or Registers- This is a type of memory in which data is stored and accepted immediately by the CPU. The most commonly used registers are the Accumulator, Program Counter, Address Register, etc.
Level 2 or Cache Memory- It is the fastest memory, with a shorter access time, where data is temporarily stored for faster access.
Level 3 or Main Memory- It is the memory on which the computer works currently. It is small in size, and once power is off, data no longer stays in this memory.
Level 4 or Secondary Memory- It is external memory that is not as fast as the main memory, but data stays permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache. If the processor finds that the memory location is in the cache,
a Cache Hit has occurred and data is read from the cache. If the processor does not find the
memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a
new entry and copies in data from the main memory, and then the request is fulfilled from the
contents of the cache. The performance of cache memory is frequently measured in terms of a quantity called the hit ratio: hit ratio = number of cache hits / (cache hits + cache misses).

Cache Mapping
There are three different types of mapping used for cache memory, which are as follows:
Direct Mapping
Associative Mapping
Set-Associative Mapping
1. Direct Mapping
The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. In other words, direct mapping assigns each memory block to a specific line in the cache. If a line is already occupied by a memory block when a new block needs to be loaded, the old block is evicted. The address is split into two parts, an index field and a tag field: the tag is stored in the cache along with the data, while the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
2. Associative Mapping
In this type of mapping, associative memory is used to store both the content and the address of the memory word. Any block can go into any line of the cache. The word-id bits identify which word in the block is needed, while all of the remaining bits form the tag. This enables the placement of any block at any place in the cache memory, and it is considered the fastest and most flexible mapping form. In associative mapping, there are no index bits (the index is of zero length).
3. Set-Associative Mapping
This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct-mapping method: instead of having exactly one line that a block can map to in the cache, a few lines are grouped together to create a set, and a block in memory can map to any one of the lines of a specific set. This allows two or more blocks that share the same index to be present in the cache at the same time. Set-associative cache mapping combines the best of the direct and associative cache-mapping techniques. In set-associative mapping, the index bits are given by the set-offset bits: the cache consists of a number of sets, each of which consists of a number of lines.
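
To make the three mappings concrete, the sketch below computes where one main-memory block would land in each scheme. The cache geometry (128 lines, 4-way sets) and the block number are assumptions chosen only for illustration:

#include <stdio.h>

#define NUM_LINES 128   /* assumed cache size in lines */
#define WAYS      4     /* assumed associativity       */
#define NUM_SETS  (NUM_LINES / WAYS)

int main(void) {
    unsigned long block = 2053;  /* assumed main-memory block number */

    /* Direct mapping: each block maps to exactly one line. */
    unsigned long line = block % NUM_LINES;
    unsigned long tag_direct = block / NUM_LINES;

    /* Set-associative: the block maps to one set, any way within it. */
    unsigned long set = block % NUM_SETS;
    unsigned long tag_assoc = block / NUM_SETS;

    printf("direct: line %lu, tag %lu\n", line, tag_direct);
    printf("4-way : set %lu, tag %lu\n", set, tag_assoc);
    /* Fully associative: no index at all; the whole block number is the tag. */
    return 0;
}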

Application of Cache Memory


Here are some of the applications of Cache Memory.
Primary Cache: A primary cache is always located on the processor chip. This cache is small
and its access time is comparable to that of processor registers.
Secondary Cache: Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the
processor chip.
Spatial Locality of Reference: There is a high chance that the next reference will be to an element in close proximity to the current reference point; accesses tend to cluster in nearby addresses.
Temporal Locality of Reference: A recently referenced item is likely to be referenced again soon, which is why replacement policies such as Least Recently Used (LRU) work well. It is also why, on a miss, a complete block (rather than a single word) is loaded into the cache: by the spatial locality rule, if you are referring to any word, the words next to it are likely to be referred to next, so the complete block is loaded.

Advantages of Cache Memory



Cache Memory is faster in comparison to main memory and secondary memory.
Programs stored by Cache Memory can be executed in less time.
The data access time of Cache Memory is less than that of the main memory.
Cache memory stores data and instructions that are regularly used by the CPU; this increases the performance of the CPU.
Disadvantages of Cache Memory
Cache Memory is costlier than primary memory and secondary memory.
Data is stored on a temporary basis in Cache Memory.
Whenever the system is turned off, data and instructions stored in cache memory get destroyed.
The high cost of cache memory increases the price of the Computer System.
Locality of Reference
Locality of reference refers to a phenomenon in which a computer program tends to access the same set of memory locations over a particular period of time. In other words, locality of reference is the tendency of a program to access instructions whose addresses are near one another. The property of locality of reference is mainly exhibited by loops and subroutine calls in a program.

Fig:4.22
In the case of loops, the central processing unit repeatedly refers to the set of instructions that constitute the loop.
In the case of subroutine calls, the same set of instructions is fetched from memory each time the subroutine is invoked.
References to data items also get localized, meaning the same data item is referenced again and again.

Cache Operation-
It is based on the principle of locality of reference. There are two ways in which data or instructions are fetched from main memory and stored in cache memory. These two ways are the following:
Temporal Locality-
Temporal locality means current data or instruction that is being fetched may be needed soon.
So we should store that data or instruction in the cache memory so that we can avoid again
searching in main memory for the same data.
Spatial Locality–
Spatial locality means that the instructions or data located near the current memory location being fetched may be needed soon. This is slightly different from temporal locality: here we are talking about nearby memory locations, while in temporal locality we were talking about the actual memory location that was fetched.

IMPORTANT QUESTIONS
Q.1 What are the memory management requirements?

Q.2 Explain static partitioned allocation with partition sizes 300, 150, 100, 200, 20. Assuming the first-fit method, indicate the memory status after memory requests of sizes 80, 180, 280, 380, 30.

Q.3 Explain the difference between logical and physical addresses?

Q.4 Explain hierarchical page table and inverted page table.

Q.5 What is segmentation? Explain the basic segmentation method.

Q.6 What is virtual memory? How is it implemented?

Q.7 What is demand paging? Explain it with address translation mechanism used.

Q.8 Consider the following page reference string: 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2.

Q.9 How many page faults would occur for the following replacement algorithms, assuming four and six frames respectively?

1) LRU page replacement.


2) FIFO page replacement.
Q.10 Describe the term page fault frequency. What is thrashing? How do OS control it?
Q.11 Explain the difference between internal and external fragmentation in detail.
Q.12 What is swapping? Why does one need to swap areas of memory?
Q.13 Explain how segmented memory management works. Also explain in detail address translation and relocation in segmented memory management.
Q.14 What is the purpose of a TLB? Explain the TLB lookup with the help of a block diagram, explaining
the hardware required.
Q.15 Compare and contrast paging with segmentation. In particular, describe issues related to fragmentation.
Q.16 What is the impact of fixed partitioning on fragmentation?
Q.17 Give the relative advantages and disadvantages of load-time dynamic linking and run-time dynamic linking. Differentiate them from static linking.
Q.18 What is meant by virtual memory? With the help of a block diagram explain the data structures
used.
Q.19 What is a page and what is a frame? How are the two related?
Q.20 Give a description of hardware support for paging.
Q.21 What is a page fault? What action does the OS take when a page fault occurs?



UNIT 5 (I/O Management and Disk Scheduling)
I/O Management and Disk Scheduling: I/O devices, and I/O subsystems, I/O buffering, Disk storage
and disk scheduling, RAID. File System: File concept, File organization and access mechanism, File
directories, and File sharing, File system implementation issues, File system protection and security.

LECTURE 43
What is the need for I/O Management?
I/O Devices
One of the important jobs of an operating system is to manage various I/O devices, including the mouse, keyboard, touchpad, disk drives, display adapters, USB devices, bit-mapped screens, LEDs, analog-to-digital converters, on/off switches, network connections, audio I/O, and printers. An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories:
● Block devices − A block device is one with which the driver communicates by sending entire
blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
● Character devices − A character device is one with which the driver communicates by
sending and receiving single characters (bytes, octets). For example, serial ports, parallel
ports, sounds cards etc.

Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
The device controller works like an interface between a device and a device driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component, where the electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with the operating system. A device controller may be able to handle multiple devices. As an interface, its main task is to convert a serial bit stream to a block of bytes and perform error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.



Fig :5.1(device controller)
Synchronous vs Asynchronous I/O
● Synchronous I/O − In this scheme CPU execution waits while I/O proceeds
● Asynchronous I/O − I/O proceeds concurrently with CPU execution

Communication to I/O Devices


The CPU must have a way to pass information to and from an I/O device. There are three approaches available for communication between the CPU and a device:
● Special Instruction I/O
● Memory-mapped I/O
● Direct memory access (DMA)

Special Instruction I/O


This uses CPU instructions that are specifically made for controlling I/O devices. These
instructions typically allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is connected directly to certain main memory locations so that the I/O device can transfer blocks of data to/from memory without going through the CPU.



Fig :5.2(Memory mapped I/O)
While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O device to use that buffer to send data to the CPU. The I/O device operates asynchronously with the CPU and interrupts the CPU when finished.
The advantage of this method is that every instruction which can access memory can be used to manipulate an I/O device. Memory-mapped I/O is used for most high-speed I/O devices like disks and communication interfaces.
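
As a minimal sketch of what memory-mapped I/O looks like to software, the fragment below reads and writes hypothetical device registers through volatile pointers; the addresses and register names are invented for illustration and would come from the platform's memory map in practice:

#include <stdint.h>

/* Hypothetical register addresses -- real values come from the
   platform's memory map, not from the OS. */
#define TEMP_REG ((volatile uint32_t *)0x40020000)
#define LED_REG  ((volatile uint32_t *)0x40020004)

void update_led(void)
{
    uint32_t temp = *TEMP_REG;   /* an ordinary load reads the device  */
    *LED_REG = (temp > 30);      /* an ordinary store commands it      */
}

/* 'volatile' stops the compiler from caching or removing these accesses,
   since the register contents can change on their own. */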
Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is
transferred. If a fast device such as a disk generated an interrupt for each byte, the operating
system would spend most of its time handling these interrupts. So a typical computer uses direct
memory access (DMA) hardware to reduce this overhead.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is involved only at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages
the data transfers and arbitrates access to the system bus. The controllers are programmed with
source and destination pointers (where to read/write the data), counters to track the number of
transferred bytes, and settings, which includes I/O and memory types, interrupts and states for
the CPU cycles.



Fig:5.3(DMA)

The operating system uses the DMA hardware as follows:

1. The device driver is instructed to transfer disk data to a buffer at address X.
2. The device driver then instructs the disk controller to transfer the data to the buffer.
3. The disk controller starts the DMA transfer.
4. The disk controller sends each byte to the DMA controller.
5. The DMA controller transfers bytes to the buffer, increasing the memory address and decreasing the counter C until C becomes zero.
6. When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.
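
The same six steps can be pictured as the driver programming a DMA controller. The register layout below is purely hypothetical (a real DMAC defines its own), but the sequence (set source, destination, and counter C, then start and wait for the completion interrupt) mirrors the steps above:

#include <stdint.h>

/* Hypothetical DMA controller registers, mirroring the steps above.
   A real DMAC defines its own layout; treat these as placeholders. */
struct dmac {
    volatile uint32_t src;     /* where to read from (device buffer)   */
    volatile uint32_t dst;     /* where to write to (memory buffer X)  */
    volatile uint32_t count;   /* counter C: bytes left to transfer    */
    volatile uint32_t control; /* start bit, interrupt-enable bit, ... */
};

#define DMA_START      0x1u
#define DMA_IRQ_ENABLE 0x2u

void dma_start_transfer(struct dmac *d, uint32_t src, uint32_t dst, uint32_t n)
{
    d->src   = src;
    d->dst   = dst;
    d->count = n;                            /* C counts down to zero */
    d->control = DMA_START | DMA_IRQ_ENABLE;
    /* The CPU now continues other work; the DMAC raises an interrupt
       when count reaches zero (step 6). */
}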

Polling vs Interrupts I/O

A computer must have a way of detecting the arrival of any type of input. There are two ways
that this can happen, known as polling and interrupts. Both of these techniques allow the
processor to deal with events that can happen at any time and that are not related to the process it
is currently running.



Polling I/O-
Polling is the simplest way for an I/O device to communicate with the processor. The process of
periodically checking status of the device to see if it is time for the next I/O operation, is called
polling. The I/O device simply puts the information in a Status register, and the processor must
come and get the information.
Most of the time, devices will not require attention, and when one does, it will have to wait until it is next interrogated by the polling program. This is an inefficient method, and much of the processor's time is wasted on unnecessary polls.
Compare this method to a teacher continually asking every student in a class, one after another, if
they need help. Obviously the more efficient method would be for a student to inform the teacher
whenever they require assistance.
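
A minimal sketch of programmed polling is shown below; the status and data register addresses are assumptions for illustration. The busy-wait loop is exactly the "asking every student" pattern described above:

#include <stdint.h>

/* Hypothetical device registers -- placeholders for illustration. */
#define STATUS_REG ((volatile uint32_t *)0x40001000)
#define DATA_REG   ((volatile uint32_t *)0x40001004)
#define DATA_READY 0x1u

/* Busy-wait polling: the CPU repeatedly interrogates the status register;
   every iteration before the device is ready is a wasted, unnecessary poll. */
uint32_t poll_read(void)
{
    while ((*STATUS_REG & DATA_READY) == 0)
        ;                   /* spin until the device has data      */
    return *DATA_REG;       /* fetch the data the device supplied  */
}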
Interrupt I/O-
An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is a
signal to the microprocessor from a device that requires attention.
A device controller puts an interrupt signal on the bus when it needs the CPU's attention. When the CPU receives an interrupt, it saves its current state and invokes the appropriate interrupt handler using the interrupt vector (the addresses of OS routines that handle various events). When the interrupting device has been dealt with, the CPU continues with its original task as if it had never been interrupted.

LECTURE 44
I/O Subsystems
Kernel I/O Subsystem in Operating System
The kernel provides many services related to I/O. Several services such as scheduling, caching,
spooling, device reservation, and error handling – are provided by the kernel’s I/O subsystem
built on the hardware and device-driver infrastructure. The I/O subsystem is also responsible for
protecting itself from errant processes and malicious users.

1. I/O Scheduling –
To schedule a set of I/O requests means to determine a good order in which to execute them. The order in which applications issue their system calls is rarely the best choice. Scheduling can improve the overall performance of the system, share device access fairly among processes, and reduce the average waiting time, response time, and turnaround time for I/O to complete. OS developers implement scheduling by maintaining a wait queue of requests for each device. When an application issues a blocking I/O system call, the request is placed in the queue for that device. The I/O scheduler rearranges the order to improve the efficiency of the system.

2. Buffering –



A buffer is a memory area that stores data being transferred between two devices or between a device and an application. Buffering is done for three reasons:
1. To cope with a speed mismatch between the producer and consumer of a data stream.
2. To provide adaptation between devices that have different data-transfer sizes.
3. To support copy semantics for application I/O. "Copy semantics" means, for example, that when an application wants to write data stored in its buffer to disk, it calls the write() system call, providing a pointer to the buffer and an integer specifying the number of bytes to write; the version of the data written to disk is the version in the buffer at the time of the system call.

3. Caching –
A cache is a region of fast memory that holds a copy of data. Access to the cached copy is much faster than access to the original. For instance, the instructions of the currently running process are stored on disk, cached in physical memory, and copied again in the CPU's secondary and primary caches.
The main difference between a buffer and a cache is that a buffer may hold the only existing copy of a data item, while a cache, by definition, holds a copy on faster storage of an item that resides elsewhere.
4. Spooling and Device Reservation –
A spool is a buffer that holds the output for a device, such as a printer, that cannot accept interleaved data streams. Although a printer can serve only one job at a time, several applications may wish to print their output concurrently, without having their outputs mixed together.

The OS solves this problem by intercepting all output to the printer. Each application's output is spooled to a separate disk file. When an application finishes printing, the spooling system queues the corresponding spool file for output to the printer.
5. Error Handling –
An OS that uses protected memory can guard against many kinds of hardware and application errors, so that a complete system failure is not the usual result of each minor mechanical glitch. Devices and I/O transfers can fail in many ways, either for transient reasons, as when a network becomes overloaded, or for permanent reasons, as when a disk controller becomes defective.
Error Handling Strategies: Ensuring robust error handling is a critical aspect of the
Kernel I/O Subsystem to maintain the stability and reliability of the operating system.
The strategies employed for error handling involve mechanisms for detecting, reporting,
and recovering from I/O errors. Below are key components of error handling strategies
within the Kernel I/O Subsystem:



1. Error Detection Mechanisms: The Kernel I/O Subsystem incorporates various
mechanisms to detect I/O errors promptly
2. Error Reporting: Once an error is detected, the Kernel I/O Subsystem employs
mechanisms to report the error to higher levels of the operating system or user
applications.
3. Error Recovery Mechanisms: Recovering from I/O errors is crucial to maintaining
system stability.
4. User Notification: Informing users or administrators about I/O errors is essential
for timely intervention and system maintenance:

User Alerts: Providing alerts to users, either through the user interface or system
notifications, can prompt immediate attention to potential issues.
Automated Notifications: Implementing automated notification systems, such as emails
or messages, to inform system administrators about critical errors for proactive system
management.
6. I/O Protection –
Errors and the issue of protection are closely related. A user process may attempt to issue illegal
I/O instructions to disrupt the normal function of a system. We can use the various mechanisms
to ensure that such disruption cannot take place in the system.

The Kernel I/O Subsystem in Operating System


An Operating System (OS) is a complex software program that manages the hardware and
software resources of a computer system. One of the critical components of an OS is the Kernel
I/O Subsystem, which provides an interface between the operating system and input/output (I/O)
devices. The Kernel I/O Subsystem manages the I/O requests made by the user applications and translates them into hardware commands that the devices can understand. Below, we discuss the importance of the Kernel I/O Subsystem and its advantages and disadvantages.

Importance of Kernel I/O Subsystem


The Kernel I/O Subsystem is an essential part of any modern Operating System. It provides a
unified and consistent interface to the I/O devices, which enables the user applications to access
them without knowing the details of the underlying hardware. The Kernel I/O Subsystem also
manages the concurrency and synchronization issues that arise when multiple applications try to
access the same device simultaneously.
Advantages of Kernel I/O Subsystem
● Device Independence: The Kernel I/O Subsystem provides device independence to the user
applications. It abstracts the hardware details and provides a unified interface to the devices.
This means that the application developers can write code that is independent of the hardware
platform, and the Kernel I/O Subsystem takes care of the hardware-specific details.



● Efficient Resource Management: The Kernel I/O Subsystem provides efficient resource
management for the I/O devices. It manages the I/O requests and schedules them in a way
that optimizes the usage of the available resources. This ensures that the I/O devices are not
over utilized, and the system remains responsive.
● Concurrency Management: The Kernel I/O Subsystem manages the concurrency issues
that arise when multiple applications try to access the same device simultaneously. It ensures
that the applications get exclusive access to the device when needed and allows multiple
applications to share the device when appropriate.
Disadvantages of Kernel I/O Subsystem
● Complex Implementation: The Kernel I/O Subsystem is a complex software component
that requires a lot of resources to implement and maintain. Any issues with the Kernel I/O
Subsystem can affect the performance and stability of the entire system.
● Security Risks: The Kernel I/O Subsystem can pose security risks to the system if not
implemented correctly. Attackers can exploit vulnerabilities in the Kernel I/O Subsystem to
gain unauthorized access to the system or cause a denial-of-service attack.
Functions and services offered by the Kernel:
1. Process management: Save context of the interrupted program, dispatch a process,
manipulate scheduling lists.
2. Process communication: Send and receive interprocess messages.
3. Memory management: Set memory protection information, swap-in/ swap-out, handle page
fault.
4. I/O management: Initiate I/O, process I/O completion interrupt, recover from I/O errors.
5. File management: Open a file, read/ write data.
6. Security and protection: Add authentication information for a new user, maintain
information for file protection.

7. Network management: Send/ receive data through a message.



LECTURE 45
I/O Buffering
I/O Buffering and its Various Techniques

A buffer is a memory area that stores data being transferred between two devices or between a
device and an application.
Uses of I/O Buffering:
● Buffering is done to deal effectively with a speed mismatch between the producer and consumer of the data stream.
● A buffer is created in main memory to accumulate the bytes received from, say, a modem.
● After the buffer is filled, the data is transferred from the buffer to the disk in a single operation.
● This transfer is not instantaneous, so the modem needs another buffer in which to store additional incoming data.
● When the first buffer is full, a request is made to transfer its data to disk.
● The modem then starts filling the second buffer with incoming data while the data in the first buffer is transferred to disk.
● When both buffers have completed their tasks, the modem switches back to the first buffer while the data from the second buffer is transferred to disk.
● The use of two buffers decouples the producer and the consumer of the data, thus relaxing the timing requirements between them.
● Buffering also accommodates devices that have different data-transfer sizes.
Types of various I/O buffering techniques:
1. Single buffer: A buffer is provided by the operating system in the system portion of main memory.
Block-oriented device –
● The system buffer takes the input.
● After taking the input, the block is transferred to user space by the process, and the process then requests another block.
● Two blocks work simultaneously: while one block of data is processed by the user process, the next block is being read in.
● The OS can swap the processes.
● The OS can record the data of the system buffer to user processes.



Stream-oriented device –
● Line-at-a-time operation is used for scroll-mode terminals. The user inputs one line at a time, with a carriage return signaling the end of a line.
● Byte-at-a-time operation is used on forms-mode terminals, where each keystroke is significant.

Fig:5.3(Stream oriented device)


2. Double buffer:
Block-oriented –
● There are two buffers in the system.
● One buffer is used by the driver or controller to store data while waiting for it to be taken by a higher level of the hierarchy.
● The other buffer is used to store data from the lower-level module.
● Double buffering is also known as buffer swapping.
● A major disadvantage of double buffering is that it increases the complexity of the process.
● If the process performs rapid bursts of I/O, then double buffering may be insufficient.

Stream-oriented –
● For line-at-a-time I/O, the user process need not be suspended for input or output, unless the process runs ahead of the double buffer.
● For byte-at-a-time operations, a double buffer offers no advantage over a single buffer of twice the length.

3. Circular buffer:



● When more than two buffers are used, the collection of buffers is itself referred to as a circular buffer.
● In this scheme, the data is not passed directly from the producer to the consumer, because the data could change due to buffers being overwritten before they have been consumed.
● The producer can only fill up to buffer i-1 while the data in buffer i is waiting to be consumed.

Fig:5.4 (Circular I/O buffer)
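
A minimal single-threaded sketch of a circular buffer is given below; the slot size and count are arbitrary, and a real kernel implementation would add locking between the producer and the consumer:

#include <stddef.h>

#define NBUF 8                        /* number of slots, chosen arbitrarily */

struct ring {
    char   slot[NBUF][512];           /* NBUF fixed-size buffers             */
    size_t in;                        /* next slot the producer fills        */
    size_t out;                       /* next slot the consumer drains       */
    size_t used;                      /* slots holding unconsumed data       */
};

/* Producer side: may fill only up to buffer i-1 while buffer i
   still waits to be consumed. */
int ring_put(struct ring *r)
{
    if (r->used == NBUF)
        return -1;                    /* full: producer must wait   */
    /* ... copy incoming data into r->slot[r->in] here ... */
    r->in = (r->in + 1) % NBUF;       /* wrap around circularly     */
    r->used++;
    return 0;
}

/* Consumer side: drains the oldest filled slot. */
int ring_get(struct ring *r)
{
    if (r->used == 0)
        return -1;                    /* empty: consumer must wait  */
    /* ... copy r->slot[r->out] to its destination here ... */
    r->out = (r->out + 1) % NBUF;
    r->used--;
    return 0;
}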

LECTURE 46
Disk Storage and Disk Scheduling
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O Scheduling.
Importance of Disk Scheduling in Operating System
● Multiple I/O requests may arrive by different processes and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in the waiting
queue and need to be scheduled.
● Two or more requests may be far from each other so this can result in greater disk arm
movement.
● Hard drives are one of the slowest parts of the computer system and thus need to be accessed
in an efficient manner.
Disk Scheduling Algorithms
● FCFS (First Come First Serve)
● SSTF (Shortest Seek Time First)
● SCAN (Elevator Algorithm)
● C-SCAN (Circular SCAN)
● LOOK
● C-LOOK
Key Terms Associated with Disk Scheduling

● Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the
data is to be read or written. So the disk scheduling algorithm that gives a minimum average
seek time is better.
● Rotational Latency: Rotational Latency is the time taken by the desired sector of the disk to
rotate into a position so that it can access the read/write heads. So the disk scheduling
algorithm that gives minimum rotational latency is better.
● Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating
speed of the disk and the number of bytes to be transferred.
● Disk Access Time:

Disk Access Time = Seek Time + Rotational Latency + Transfer Time


Total Seek Time = Total head Movement * Seek Time

Fig:5.5 (Disk Access Time and Disk Response Time)

● Disk Response Time: Response Time is the average time spent by a request waiting to
perform its I/O operation. The average Response time is the response time of all
requests. Variance Response Time is the measure of how individual requests are serviced
with respect to average response time. So the disk scheduling algorithm that gives minimum
variance response time is better.

Disk Scheduling Algorithms


There are several disk scheduling algorithms; we will discuss each of them.
FCFS (First Come First Serve)

FCFS is the simplest of all Disk Scheduling Algorithms. In FCFS, the requests are addressed in
the order they arrive in the disk queue. Let us understand this with the help of an example.



LECTURE 47
First Come First Serve
Example:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is: 50
So, total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642

Advantages of FCFS
Here are some of the advantages of First Come First Serve.
● Every request gets a fair chance
● No indefinite postponement

Disadvantages of FCFS
Here are some of the disadvantages of First Come First Serve.
● Does not try to optimize seek time
● May not provide the best possible service
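
Computing the FCFS figure above takes nothing more than summing the distances between consecutive requests in arrival order; a minimal sketch in C:

#include <stdio.h>
#include <stdlib.h>

/* Total head movement under FCFS: sum the distances between
   consecutive requests in the order they arrived. */
int fcfs_seek(int head, const int *req, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

int main(void) {
    int req[] = {82, 170, 43, 140, 24, 16, 190};  /* queue from the example */
    printf("%d\n", fcfs_seek(50, req, 7));        /* prints 642 */
    return 0;
}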

SSTF (Shortest Seek Time First)


In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first. So,
the seek time of every request is calculated in advance in the queue and then they are scheduled
according to their calculated seek time. As a result, the request near the disk arm will get
executed first. SSTF is certainly an improvement over FCFS as it decreases the average response



time and increases the throughput of the system. Let us understand this with the help of an
example.
Example:

Shortest Seek Time First

Suppose the order of request is- (82,170,43,140,24,16,190)


And current position of Read/Write head is: 50
So,
Total overhead movement (total distance covered by the disk arm) =
(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) =208
Advantages of Shortest Seek Time First
Here are some of the advantages of Shortest Seek Time First.
● The average Response Time decreases
● Throughput increases
Disadvantages of Shortest Seek Time First
Here are some of the disadvantages of Shortest Seek Time First.
● Overhead to calculate seek time in advance
● Can cause Starvation for a request if it has a higher seek time as compared to incoming
requests
● The high variance of response time as SSTF favors only some requests
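
The greedy selection that distinguishes SSTF can be sketched the same way; this reproduces the 208-cylinder total from the example (cylinder numbers are non-negative, so -1 safely marks a serviced request):

#include <stdio.h>
#include <stdlib.h>

/* SSTF: repeatedly pick the pending request closest to the current
   head position (greedy on seek distance). */
int sstf_seek(int head, int *req, int n)
{
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)          /* find the nearest request */
            if (req[i] >= 0 && (best < 0 ||
                abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        req[best] = -1;                      /* mark as serviced */
    }
    return total;
}

int main(void) {
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    printf("%d\n", sstf_seek(50, req, 7));   /* prints 208 */
    return 0;
}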
SCAN
In the SCAN algorithm the disk arm moves in a particular direction and services the requests
coming in its path and after reaching the end of the disk, it reverses its direction and again
services the request arriving in its path. So, this algorithm works as an elevator and is hence also



known as an elevator algorithm. As a result, the requests at the midrange are serviced more and
those arriving behind the disk arm will have to wait.
Example:

SCAN Algorithm

Suppose the requests to be addressed are- 82, 170, 43, 140, 24, 16, 190. And the Read/Write arm
is at 50, and it is also given that the disk arm should move “towards the larger value”.
Therefore, the total overhead movement (total distance covered by the disk arm) is calculated as
= (199-50) + (199-16) = 332

Advantages of SCAN Algorithm


Here are some of the advantages of the SCAN Algorithm.
● High throughput
● Low variance of response time
● Reasonable average response time
Disadvantages of SCAN Algorithm
Here are some of the disadvantages of the SCAN Algorithm.
● Long waiting time for requests for locations just visited by disk arm

LECTURE 48
C-SCAN
In the SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its direction. So, it may be possible that too many requests are waiting at the other end or there
may be zero or few requests pending at the scanned area.
These situations are avoided in the CSCAN algorithm in which the disk arm instead of reversing
its direction goes to the other end of the disk and starts servicing the requests from there. So, the



disk arm moves in a circular fashion and this algorithm is also similar to the SCAN algorithm
hence it is known as C-SCAN (Circular SCAN).

Example:

Circular SCAN

Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at


50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
=(199-50) + (199-0) + (43-0) = 391

Advantages of C-SCAN Algorithm


Here are some of the advantages of C-SCAN.
● Provides more uniform wait time compared to SCAN.
LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm except that the disk arm, instead of going all the way to the end of the disk, goes only to the last request to be serviced in front of the head and then reverses its direction from there. Thus it prevents the extra delay caused by unnecessary traversal to the end of the disk.



Example:

LOOK Algorithm

Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at


50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
= (190-50) + (190-16) = 314

C-LOOK
As LOOK is similar to the SCAN algorithm, C-LOOK is similarly the counterpart of the C-SCAN disk scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end of the disk, goes only to the last request to be serviced in front of the head and then jumps from there to the last request at the other end. Thus, it also prevents the extra delay caused by unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The Read/Write arm is at 50, and it is also given that the disk arm should move "towards the larger value".



C-LOOK

So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341

LECTURE 49
RAID
RAID (Redundant Arrays of Independent Disks)
RAID is a technique that makes use of a combination of multiple disks instead of using a single
disk for increased performance, data redundancy, or both. The term was coined by David
Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.

Why Data Redundancy?


Data redundancy, although it takes up extra space, adds to disk reliability. This means that, in case of a disk failure, if the same data is also backed up onto another disk, we can retrieve the data and carry on with the operation. On the other hand, if the data is spread across multiple disks without the RAID technique, the loss of a single disk can affect all of the data.

Key Evaluation Points for a RAID System


● Reliability: How many disk faults can the system tolerate?
● Availability: What fraction of the total session time is a system in uptime mode, i.e. how
available is the system for actual use?
● Performance: How good is the response time? How high is the throughput (rate of
processing work)? Note that performance contains a lot of parameters and not just the two.
● Capacity: Given a set of N disks each with B blocks, how much useful capacity is available
to the user?
RAID is very transparent to the underlying system. This means, that to the host system, it
appears as a single big disk presenting itself as a linear array of blocks. This allows older
technologies to be replaced by RAID without making too many changes to the existing code.
Different RAID Levels

1. RAID-0 (Striping)
2. RAID-1 (Mirroring)
3. RAID-2 (Bit-Level Striping with Dedicated Parity)
4. RAID-3 (Byte-Level Striping with Dedicated Parity)
5. RAID-4 (Block-Level Striping with Dedicated Parity)
6. RAID-5 (Block-Level Striping with Distributed Parity)
7. RAID-6 (Block-Level Striping with Two Parity Bits)

Fig:5.5 (Raid Controller)

1. RAID-0 (Striping)
● Blocks are "striped" across disks.

RAID-0

● In the figure, blocks "0, 1, 2, 3" form a stripe.



● Instead of placing just one block into a disk at a time, we can work with two (or more) blocks
placed into a disk before moving on to the next one.

Raid-0

Evaluation
● Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
● Capacity: N*B
The entire space is being used to store data. Since there is no duplication, N disks each
having B blocks are fully utilized.

Advantages
1. It is easy to implement.
2. It utilizes the storage capacity in a better way.
Disadvantages
1. A single drive loss can result in the complete failure of the system.
2. Not a good choice for a critical system.

2. RAID-1 (Mirroring)
● More than one copy of each block is stored on a separate disk. Thus, every block has two (or more) copies, lying on different disks.



Raid-1

● The above figure shows a RAID-1 system with mirroring level 2.


● RAID-0 was unable to tolerate any disk failure. But RAID-1 provides reliability through redundancy.
Evaluation
Assume a RAID system with mirroring level 2.
● Reliability: 1 to N/2
1 disk failure can be handled for certain because blocks of that disk would have duplicates on
some other disk. If we are lucky enough and disks 0 and 2 fail, then again this can be handled
as the blocks of these disks have duplicates on disks 1 and 3. So, in the best case, N/2 disk
failures can be handled.
● Capacity: N*B/2
Only half the space is used to store data; the other half is just a mirror of the already stored data.
Advantages
1. It covers complete redundancy.
2. It can increase data security and speed.
Disadvantages
1. It is highly expensive.
2. Storage capacity is less.

3. RAID-2 (Bit-Level Striping with Dedicated Parity)


● In RAID-2, the data is checked for errors at every bit level. The Hamming code parity method is used to find errors in the data.
● It uses designated drives to store parity.
● The structure of RAID-2 is very complex: the bits of each word are striped across the data disks, while separate disks store the error-correcting code.
● It is not commonly used.

Advantages
1. For error correction, it uses the Hamming code.
2. It uses designated drives to store parity.
Disadvantages
1. It has a complex structure and a high cost due to the extra drives.
2. It requires extra drives for error detection.



4. RAID-3 (Byte-Level Striping with Dedicated Parity)

● It consists of byte-level striping with dedicated parity.

● At this level, parity information is stored on a dedicated parity drive.
● Whenever a drive failure occurs, the parity drive is accessed, through which we can reconstruct the data.

Raid-3

● Here Disk 3 contains the parity bits for Disk 0, Disk 1, and Disk 2. If data loss occurs, we can reconstruct the data with the help of Disk 3.
Advantages
1. Data can be transferred in bulk.
2. Data can be accessed in parallel.
Disadvantages
1. It requires an additional drive for parity.
2. In the case of small-size files, it performs slowly.



5. RAID-4 (Block-Level Striping with Dedicated Parity)

Instead of duplicating data, this level adopts a parity-based approach.

Raid-4

● In the figure, we can observe one column (disk) dedicated to parity.


● Parity is calculated using a simple XOR function. If the data bits are 0,0,0,1, the parity bit is XOR(0,0,0,1) = 1. If the data bits are 0,1,1,0, the parity bit is XOR(0,1,1,0) = 0. In short, an even number of ones results in parity 0, and an odd number of ones results in parity 1.

Raid-4

● Assume that in the above figure, C3 is lost due to a disk failure. Then we can recompute the data bits stored in C3 by looking at the values of all the other columns and the parity bit. This allows us to recover lost data.
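
The XOR recovery just described can be demonstrated in a few lines of C; the strip values are arbitrary, and each "disk" is shrunk to a single byte for illustration:

#include <stdio.h>
#include <stdint.h>

#define NDATA 4   /* data disks; one extra disk holds the parity */

int main(void) {
    /* One byte-sized strip per data disk (illustrative values). */
    uint8_t disk[NDATA] = {0x0A, 0x1B, 0x2C, 0x3D};

    /* Parity = XOR of all data strips. */
    uint8_t parity = 0;
    for (int i = 0; i < NDATA; i++)
        parity ^= disk[i];

    /* Suppose disk 2 fails: XORing the survivors with the parity
       reconstructs the lost strip. */
    uint8_t lost = parity;
    for (int i = 0; i < NDATA; i++)
        if (i != 2)
            lost ^= disk[i];

    printf("reconstructed = 0x%02X (original 0x%02X)\n", lost, disk[2]);
    return 0;
}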

Evaluation
● Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the way parity works). If more
than one disk fails, there is no way to recover the data.



● Capacity: (N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1) disks are made
available for data storage, each disk having B blocks.
Advantages
1. It helps in reconstructing the data if at most one disk fails.
Disadvantages
1. It cannot help in reconstructing data when more than one disk fails.

6. RAID-5 (Block-Level Striping with Distributed Parity)


● This is a slight modification of the RAID-4 system where the only difference is that the
parity rotates among the drives.

Raid-5

● In the figure, we can notice how the parity bit “rotates”.


● This was introduced to make the random write performance better.
Evaluation
● Reliability: 1
RAID-5 allows recovery of at most 1 disk failure (because of the way parity works). If more
than one disk fails, there is no way to recover the data. This is identical to RAID-4.
● Capacity: (N-1)*B
Overall, space equivalent to one disk is utilized in storing the parity. Hence, (N-1) disks are
made available for data storage, each disk having B blocks.

Advantages
1. Data can be reconstructed using parity bits.
2. It makes the performance better.



Disadvantages
1. Its technology is complex, and extra space is required for parity.
2. If two disks fail at the same time, the data is lost forever.

7. RAID-6 (Block-Level Striping with Two Parity Bits)


● RAID-6 helps when more than one disk fails. A pair of independent parities is generated and stored on multiple disks at this level. Ideally, four disk drives are needed for this level.
● There are also hybrid RAIDs, which make use of more than one RAID level nested one after
the other, to fulfill specific requirements.

Raid-6

Advantages
1. Very high data Accessibility.
2. Fast read data transactions.
Disadvantages
1. Due to double parity, it has slow write data transactions.
2. Extra space is required.
Advantages of RAID
● Data redundancy: By keeping numerous copies of the data on many disks, RAID can shield
data from disk failures.
● Performance enhancement: RAID can enhance performance by distributing data over
several drives, enabling the simultaneous execution of several read/write operations.
● Scalability: RAID is scalable, therefore by adding more disks to the array, the storage
capacity may be expanded.
● Versatility: RAID is applicable to a wide range of devices, such as workstations, servers, and personal computers.



Disadvantages of RAID

● Cost: RAID implementation can be costly, particularly for arrays with large capacities.
● Complexity: The setup and management of RAID might be challenging.
● Decreased performance: The parity calculations necessary for some RAID configurations,
including RAID 5 and RAID 6, may result in a decrease in speed.
● Single point of failure: While RAID offers data redundancy, it is not a comprehensive backup solution. The array's whole contents could be lost if the RAID controller malfunctions.

LECTURE 50
File System in Operating System
A file system is a collection of files and directories used by an operating system to organize the
storage of files and to provide a pathway for users to access those files. A file system is a
software layer that manages files and folders on an electronic storage device, such as a hard disk
or flash memory.
A computer file is defined as a medium used for saving and managing data in the computer
system. The data stored in the computer system is completely in digital format, although there
can be various types of files that help us to store the data.
What is a File System?
A file system is a method an operating system uses to store, organize, and manage files and
directories on a storage device. Some common types of file systems include:
1. FAT (File Allocation Table): An older file system used by older versions of Windows and
other operating systems.
2. NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-based
operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.

A file is a collection of related information that is recorded on secondary storage, or, put differently, a collection of logically related entities. From the user's perspective, a file is the smallest allotment of logical secondary storage.



The name of the file is divided into two parts:
● Name
● Extension, separated by a period.
Issues Handled By File System
A free space is created on the hard drive whenever a file is deleted from it; many of these spaces may need to be recovered so they can be reallocated to other files. The main issue with files is choosing where to store them on the hard disk: a file may not fit into a single block, and it may be kept in non-contiguous blocks on the disk. We must keep track of all the blocks in which the parts of a file are located.

Operations on the File


A file is a collection of logically related data recorded on secondary storage. The contents of a file are defined by its creator. The various operations that can be performed on a file, such as read, write, open, and close, are called file operations. These operations are performed by the user using the commands provided by the operating system. Some common operations are as follows:
1. Create operation:
This operation is used to create a file in the file system. It is the most widely used operation
performed on the file system. To create a new file of a particular type the associated application
program calls the file system. This file system allocates space to the file. As the file system
knows the format of directory structure, so entry of this new file is made into the appropriate
directory.
2. Open operation:
This is the most common operation performed on a file. Once a file is created, it must be opened before file-processing operations can be performed. When the user wants to open a file, they provide the file name, which tells the operating system to invoke the open system call and pass the file name to the file system.
3. Write operation:
This operation is used to write information into a file. A write system call is issued that specifies the name of the file and the length of the data to be written. The file length is increased by the specified value, and the file pointer is repositioned after the last byte written.
4. Read operation:
This operation reads the contents from a file. A Read pointer is maintained by the OS, pointing
to the position up to which the data has been read.
5. Re-position or Seek operation:



The seek system call repositions the file pointer from the current position to a specific place in the file, i.e., forward or backward, depending upon the user's requirement. This operation is generally supported by file management systems that allow direct-access files.

6. Delete operation:

Deleting a file not only deletes all the data stored inside the file but also frees the disk space occupied by it. To delete the specified file, the directory is searched; when the directory entry is located, all the associated file space and the directory entry are released.

7. Truncate operation:

Truncating deletes the data inside a file without deleting its attributes. The file itself is not removed, although the information stored inside it is discarded and the file length is reset.

8. Close operation:

When the processing of a file is complete, it should be closed so that all the changes made become permanent and all the resources occupied are released. On closing, the OS deallocates all the internal descriptors that were created when the file was opened.
9. Append operation:
This operation adds data to the end of the file.
10. Rename operation:
This operation is used to rename the existing file.
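
On a POSIX system, most of these operations map directly onto system calls; a minimal sketch follows (the file name is chosen arbitrarily for illustration):

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    char buf[6];

    /* Create/open, write, re-position, read, close: the operations
       above expressed as POSIX system calls. */
    int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello", 5);            /* write: advances the file pointer */
    lseek(fd, 0, SEEK_SET);           /* seek: re-position to the start   */
    ssize_t n = read(fd, buf, 5);     /* read: from the current position  */
    buf[n > 0 ? n : 0] = '\0';

    printf("%s\n", buf);              /* prints "hello" */

    close(fd);                        /* close: release descriptors       */
    rename("demo.txt", "demo2.txt");  /* rename                           */
    unlink("demo2.txt");              /* delete: free the directory entry */
    return 0;
}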
FILE ORGANIZATION AND ACCESS MECHANISM
File organization | Order of records | Records can be deleted or replaced? | Access mode
Sequential | Order in which they were written | A record cannot be deleted, but its space can be reused for a same-length record | Sequential only
Line-sequential | Order in which they were written | No | Sequential only
Indexed | Collating sequence by key field | Yes | Sequential, random, or dynamic
Relative | Order of relative record numbers | Yes | Sequential, random, or dynamic
Table: File organization and access mode



File Access Methods

Various ways to access files stored in secondary memory


1. Sequential Access

Fig:5.6

Most operating systems access files sequentially. In other words, most files need to be accessed sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained which initially points to the base address of the file. If the user wants to read the first word of the file, the pointer provides that word to the user and increases its value by one word. This process continues till the end of the file.
Modern systems do provide the concepts of direct access and indexed access, but the most used method is sequential access, due to the fact that most files, such as text files, audio files, and video files, need to be accessed sequentially.
2. Direct Access
Direct access is mostly required in the case of database systems. In most cases, we need filtered information from the database, and sequential access can be very slow and inefficient in such cases.
Suppose every block of the storage stores 4 records and we know that the record we needed is
stored in 10th block. In that case, the sequential access will not be implemented because it will
traverse all the blocks in order to access the needed record.



Direct access will give the required result despite the fact that the operating system has to perform some complex tasks, such as determining the desired block number. However, this is generally implemented in database applications.

Fig:5.7
3. Indexed Access

If a file can be sorted on any of its fields, then an index can be assigned to a group of certain records, and a particular record can be accessed by its index. The index is nothing but the address of a record in the file.
In indexed access, searching in a large database becomes very quick and easy, but we need some extra space in memory to store the index values.

File Directories
Directory Structure in OS (Operating System)
What is a directory?
Directory can be defined as the listing of the related files on the disk. The directory may store
some or the entire file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which all the files of the partition can be listed. A directory entry is maintained for each file in the directory, storing all the information related to that file.



Fig:5.8
A directory can be viewed as a file which contains the metadata of a bunch of files.

Every Directory supports a number of common operations on the file:

1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files

LECTURE 51
Single Level Directory

The simplest method is to have one big list of all the files on the disk. The entire system contains only one directory, which lists all the files present in the file system. The directory contains one entry for each file in the file system.


Fig:5.9
This type of directory can be used for a simple system.
Advantages
1. Implementation is very simple.
2. If the sizes of the files are small, searching becomes faster.
3. File creation, searching, and deletion are very simple, since we have only one directory.
Disadvantages
1. We cannot have two files with the same name.
2. The directory may be very big, so searching for a file may take a long time.
3. Protection cannot be implemented for multiple users.
4. There is no way to group the same kind of files together.
5. Choosing a unique name for every file is complex and limits the number of files in the system, because most operating systems limit the number of characters used to construct a file name.

Two Level Directory


In two-level directory systems, we can create a separate directory for each user. There is one master directory which contains separate directories dedicated to each user. For each user, there is a different directory at the second level, containing that user's group of files. The system doesn't let a user enter another user's directory without permission.



Fig:5.10
Characteristics of two-level directory system
1. Each file has a path name of the form /user-name/file-name.
2. Different users can have the same file name.
3. Searching becomes more efficient, as only one user's list needs to be traversed.
4. The same kind of files cannot be grouped into a single directory for a particular user.

Every operating system maintains a variable PWD which contains the present directory name (within the present user's directory) so that searching can be done appropriately.
Advantages:
● The main advantage is that different users can have files with the same name, which is very helpful when there are multiple users.
● Security is provided, preventing one user from accessing another user's files.
● Searching for files becomes very easy in this directory structure.
Disadvantages:
● Along with the advantage of security comes the disadvantage that a user cannot share files with other users.
● Although users can create their own files, they do not have the ability to create subdirectories.
● Scalability is limited, because a user can't group the same types of files together.
Tree Structured Directory



Fig:5.10
In a tree-structured directory system, any directory entry can be either a file or a subdirectory. The tree-structured directory system overcomes the drawbacks of the two-level directory system: similar kinds of files can now be grouped in one directory.
Each user has their own directory and cannot enter another user's directory. However, the user has permission to read the root's data, but cannot write or modify it. Only the administrator of the system has complete access to the root directory.
Searching is more efficient in this directory structure. The concept of current working directory
is used. A file can be accessed by two types of path, either relative or absolute.
Absolute path is the path of the file with respect to the root directory of the system while relative
path is the path with respect to the current working directory of the system. In tree structured
directory systems, the user is given the privilege to create the files as well as directories.

Advantages:
● This directory structure allows subdirectories inside a directory.
● The searching is easier.
● File sorting of important and unimportant becomes easier.
● This directory is more scalable than the other two directory structures explained.



Disadvantages:
● As a user isn't allowed to access other users' directories, this prevents file sharing among users.
● As the user has the capability to make subdirectories, if the number of subdirectories increases, searching may become complicated.
● Users cannot modify the root directory's data.
● If files do not fit in one directory, they may have to be placed in other directories.
Acyclic-Graph Structured Directories
The tree-structured directory system doesn't allow the same file to exist in multiple directories,
so sharing is a major concern in that structure. We can provide sharing by making the
directory an acyclic graph. In this system, two or more directory entries can point to the
same file or sub-directory; that file or sub-directory is shared between the directory
entries.
These kinds of directory graphs can be made using links or aliases, so we can have multiple paths
to the same file. Links can be either symbolic (logical) or hard (physical).
If a file gets deleted in an acyclic-graph structured directory system, then:
1. In the case of a soft link, the file just gets deleted and we are left with a dangling pointer.
2. In the case of a hard link, the actual file is deleted only when all references to it have
been deleted.
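A minimal sketch of the two link types, assuming a POSIX system (the file names are
hypothetical):

#include <unistd.h>   /* link, symlink, unlink */

int main(void) {
    /* Hard link: a second directory entry for the same inode.
       The data survives until every hard link to it is removed. */
    link("shared.txt", "hardlink.txt");

    /* Symbolic (soft) link: a separate file that merely stores the path.
       If "shared.txt" is deleted, "softlink.txt" is left dangling. */
    symlink("shared.txt", "softlink.txt");

    unlink("shared.txt");  /* contents still reachable via hardlink.txt,
                              but softlink.txt now dangles */
    return 0;
}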

Fig:5.11 (Acyclic-graph structured directory)
Advantages:

● Sharing of files and directories is allowed between multiple users.


● Searching becomes very easy.



● Flexibility is increased, as multiple users can share and edit files.

Disadvantages:

● Because of its complex structure, this directory structure is difficult to implement.
● The user must be very cautious when editing or even deleting a file, as the file may be
accessed by multiple users.
● If we need to delete a file permanently, we need to delete all references to the file.
LECTURE 52
File Sharing in OS
File sharing in an Operating System (OS) denotes how information and files are shared between
different users, computers, or devices on a network. Files are units of data stored on a
computer in the form of documents, images, videos, or any other type of information.

For example: your computer can talk to another computer and exchange pictures, documents, or
any useful data. This is generally useful when one wants to work on a project with others, send
files to friends, or simply move data to another device. The OS provides ways to do this, like
email attachments, cloud services, etc., to make the sharing process easier and more secure.
File sharing is essentially a bridge between Computer A and Computer B that allows them to
swap files with each other.

Primary Terminology Related to File Sharing

● Folder/Directory: Basically a container for all of our files on a computer. A folder can
contain files and even other folders, maintaining a hierarchical structure for organizing
data.
● Networking: Connecting computers or devices so that they can share resources. Networks can
be local (LAN) or global (Internet).
● IP Address: A numerical label given to every device connected to the network.
● Protocol: A set of rules that drives the communication between devices on a network. In the
context of file sharing, protocols define how files are transferred between computers. For
example, the File Transfer Protocol (FTP) is a standard network protocol used to transfer
files between a client and a server on a computer network.



Various Ways to Achieve File Sharing
1. Server Message Block (SMB)
SMB is a network-based file sharing protocol mainly used in Windows operating systems. It
allows a computer to share files and printers on a network, and it is now the standard method
for seamless file transfer and printer sharing in Windows environments.
Example: Imagine a company where the employees have to collaborate on a particular project.
Here SMB/CIFS is employed to share files among the Windows-based computers. Users can access
shared folders on a server and create, modify, and delete files.


Fig:5.13(SMB file sharing)


2. Network File System (NFS)
NFS is a distributed file sharing protocol mainly used in Linux/Unix-based operating systems.
It allows a computer to share files over a network as if they were stored locally, and it
provides an efficient way to transfer files between servers and clients.
Example: Many programmers, universities, and research institutions use Unix/Linux-based
operating systems. An institute can put shared datasets on a central server using NFS;
researchers and students can then access these shared directories and collaborate on them.




Fig:5.14(NFS file sharing)


3. File Transfer Protocol (FTP)
FTP is the most common standard protocol for transferring files between a client and a server
on a computer network. FTP supports both uploading and downloading of files: we can download,
upload, and transfer files from Computer A to Computer B over the internet or between computer
systems.
Example: Suppose a developer makes changes to a website hosted on a server. Using the FTP
protocol, the developer connects to the server, uploads the new website content, and updates
the existing files there.

Fig:5.15 (FTP File Sharing)



4. Cloud-Based File Sharing
This involves the popular approach of using online services like Google Drive, Dropbox,
OneDrive, etc. Any user can store files on these cloud services and share them with others,
providing access to many users. It supports real-time collaboration on shared files and
version control.
Example: Several students working on a project can use Google Drive to store and share their
files. They can access the files from any computer or mobile device, make changes in real
time, and track the changes.

Fig:5.16(Cloud Based File Sharing)

All these file sharing methods serve different purposes and needs, according to the requirements
and flexibility of the users and the operating system concerned.

File System Implementation in Operating System

A file is a collection of related information. The file system resides on secondary storage and
provides efficient and convenient access to the disk by allowing data to be stored, located, and
retrieved.
File system implementation in an operating system refers to how the file system manages the
storage and retrieval of data on a physical storage device such as a hard drive, solid-state drive,
or flash drive. The file system implementation includes several components, including:
1. File System Structure: The file system structure refers to how the files and directories are
organized and stored on the physical storage device. This includes the layout of file system
data structures such as the directory structure, file allocation table, and inodes.
2. File Allocation: The file allocation mechanism determines how files are allocated on the
storage device. This can include allocation techniques such as contiguous allocation, linked
allocation, indexed allocation, or a combination of these techniques.



3. Data Retrieval: The file system implementation determines how the data is read from and
written to the physical storage device. This includes strategies such as buffering and caching
to optimize file I/O performance.
4. Security and Permissions: The file system implementation includes features for managing
file security and permissions. This includes access control lists (ACLs), file permissions, and
ownership management.
5. Recovery and Fault Tolerance: The file system implementation includes features for
recovering from system failures and maintaining data integrity. This includes techniques such
as journaling and file system snapshots.
File system implementation is a critical aspect of an operating system as it directly impacts the
performance, reliability, and security of the system. Different operating systems use different file
system implementations based on the specific needs of the system and the intended use cases.
Some common file systems used in operating systems include NTFS and FAT in Windows, and
ext4 and XFS in Linux.
The file system is organized into many layers:

Fig:5.17 (Layered file system)
1. I/O Control level – Device drivers act as an interface between devices and the OS; they help
to transfer data between disk and main memory. A driver takes a block number as input and, as
output, issues low-level hardware-specific instructions.
2. Basic file system – It issues general commands to the device driver to read and write physical
blocks on disk, and it manages the memory buffers and caches. A block in the buffer can hold
the contents of a disk block, and the cache stores frequently used file system metadata.



3. File organization Module – It has information about files, the location of files, and their
logical and physical blocks. Physical block numbers do not match the logical block numbers
(numbered from 0 to N), so a translation is needed. It also tracks the free space of
unallocated blocks.
4. Logical file system – It manages metadata information about a file, i.e., all details about a
file except its actual contents. It maintains this via file control blocks. A file control
block (FCB) has information about a file – owner, size, permissions, and location of the
file contents.
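As a rough sketch (the field names and layout are illustrative assumptions, not any real OS's
on-disk format), an FCB can be pictured as a C structure:

#include <sys/types.h>
#include <time.h>

/* Illustrative file control block: the metadata the logical file
   system keeps per file; the file's contents live elsewhere. */
struct fcb {
    uid_t  owner;                 /* owning user  */
    gid_t  group;                 /* owning group */
    mode_t permissions;           /* read/write/execute bits */
    off_t  size;                  /* file size in bytes */
    time_t created, modified;     /* timestamps */
    unsigned int block_count;     /* number of data blocks */
    unsigned int blocks[12];      /* location of file contents
                                     (direct block pointers) */
};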
Advantages:
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.
Beyond this layered design, file system implementation in an operating system provides several
advantages, including:
3. Efficient Data Storage: File system implementation ensures efficient data storage on a
physical storage device. It provides a structured way of organizing files and directories,
which makes it easy to find and access files.
4. Data Security: File system implementation includes features for managing file security and
permissions. This ensures that sensitive data is protected from unauthorized access.
5. Data Recovery: The file system implementation includes features for recovering from system
failures and maintaining data integrity. This helps to prevent data loss and ensures that data
can be recovered in the event of a system failure.
6. Improved Performance: File system implementation includes techniques such as buffering and
caching to optimize file I/O performance. This results in faster access to data and improved
overall system performance.
7. Scalability: File system implementation can be designed to be scalable, making it possible to
store and retrieve large amounts of data efficiently.
8. Flexibility: Different file system implementations can be designed to meet specific needs and
use cases. This allows developers to choose the best file system implementation for their
specific requirements.
9. Cross-Platform Compatibility: Many file system implementations are cross-platform compatible,
which means they can be used on different operating systems. This makes it easy to transfer
files between different systems.
In summary, file system implementation in an operating system provides several advantages,
including efficient data storage, data security, data recovery, improved performance, scalability,
flexibility, and cross-platform compatibility. These advantages make file system implementation
a critical aspect of any operating system.



Disadvantages

If we access many files at the same time, performance degrades.

A file system is implemented using two types of data structures, on-disk and in-memory. Items
1–4 below are kept on disk and items 5–8 in memory; items 9 and 10 are two ways to implement
the directory itself:
1. Boot Control Block – It is usually the first block of volume and it contains information
needed to boot an operating system. In UNIX it is called the boot block and in NTFS it is
called the partition boot sector.
2. Volume Control Block – It has information about a particular partition, e.g., free block
count, block size, and block pointers. In UNIX it is called the superblock, and in NTFS it is
stored in the master file table.
3. Directory Structure – It stores file names and associated inode numbers. In UNIX, it includes
file names and associated inode numbers; in NTFS, it is stored in the master file table.
4. Per-File FCB – It contains details about files and it has a unique identifier number to allow
association with the directory entry. In NTFS it is stored in the master file table.
5. Mount Table – It contains information about each mounted volume.
6. Directory-Structure cache – This cache holds the directory information of recently accessed
directories.
7. System-wide open-file table – It contains the copy of the FCB of each open file.
8. Per-process open-file table – It contains information about the files opened by that
particular process, and it maps to the appropriate entry in the system-wide open-file table.
9. Linear List – It maintains a linear list of filenames with pointers to the data blocks. This
is time-consuming to search. To create a new file, we must first search the directory to be
sure that no existing file has the same name, and then add the file at the end of the
directory. To delete a file, we search the directory for the named file and release its
space. To reuse the directory entry, we can either mark the entry as unused or attach it to a
list of free directory entries.
10. Hash Table – The hash table takes a value computed from the file name and returns a pointer
to the file. It decreases the directory search time, and insertion and deletion of files are
easy. The major difficulties are that hash tables generally have a fixed size and the hash
function depends on that size.
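A toy sketch of the hash-table approach (the names and table size are illustrative assumptions),
contrasting with the linear search described above:

#include <string.h>

#define TABLE_SIZE 128            /* fixed size: the classic limitation */

struct dir_entry {
    char name[32];                /* file name */
    unsigned int inode;           /* associated inode number */
    struct dir_entry *next;       /* chaining resolves collisions */
};

static struct dir_entry *table[TABLE_SIZE];

/* Hash the file name to a bucket index. */
static unsigned int hash(const char *name) {
    unsigned int h = 0;
    while (*name) h = h * 31 + (unsigned char)*name++;
    return h % TABLE_SIZE;
}

/* Lookup: expected O(1), instead of scanning the whole directory. */
struct dir_entry *dir_lookup(const char *name) {
    struct dir_entry *e = table[hash(name)];
    while (e && strcmp(e->name, name) != 0)
        e = e->next;
    return e;                     /* NULL if the file is not present */
}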

Implementation Issues

Management of disk space: To prevent space wastage and to guarantee that files can always be
stored in contiguous blocks, file systems must manage disk space effectively. Free-space
management, fragmentation prevention, and garbage collection are methods for managing disk
space.



Consistency checking and error recovery: File systems must guarantee that files and directories
remain consistent and error-free. Journaling, checksumming, and redundancy are methods for
consistency checking and error recovery. If errors happen, file systems may need to perform
recovery operations in order to restore lost or damaged data.
File locking and concurrency management: To prevent conflicts and guarantee data integrity, file
systems must control how many processes or users can access a file at once. File locking,
semaphores, and other concurrency-control methods are available.
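A brief sketch of one such mechanism, advisory record locking via the POSIX fcntl interface (the
file name is hypothetical):

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) return 1;

    /* Acquire an exclusive (write) lock on the whole file;
       F_SETLKW blocks until no conflicting lock remains. */
    struct flock lk = {0};
    lk.l_type   = F_WRLCK;
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;              /* 0 = lock through end of file */
    fcntl(fd, F_SETLKW, &lk);

    write(fd, "update", 6);       /* safe update: no concurrent writer */

    lk.l_type = F_UNLCK;          /* release the lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}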
Performance optimization: File systems need to optimize performance by reducing file access
times, increasing throughput, and minimizing system overhead. Caching, buffering, prefetching,
and parallel processing are methods for improving performance.

Key Steps Involved In File System Implementation

File system implementation is a crucial component of an operating system, as it provides an
interface between the user and the physical storage device. Here are the key steps involved in
file system implementation:
1. Partitioning the storage device: The first step in file system implementation is to partition the
physical storage device into one or more logical partitions. Each partition is formatted with a
specific file system that defines the way files and directories are organized and stored.
2. File system structures: File system structures are the data structures used by the operating
system to manage files and directories. Some of the key file system structures include the
superblock, inode table, directory structure, and file allocation table.
3. Allocation of storage space: The file system must allocate storage space for each file and
directory on the storage device. There are several methods for allocating storage space,
including contiguous, linked, and indexed allocation.
4. File operations: The file system provides a set of operations that can be performed on files
and directories, including create, delete, read, write, open, close, and seek. These
operations are implemented using the file system structures and the storage allocation
methods (see the sketch after this list).
5. File system security: The file system must provide security mechanisms to protect files and
directories from unauthorized access or modification. This can be done by setting file
permissions, access control lists, or encryption.
6. File system maintenance: The file system must be maintained to ensure efficient and reliable
operation. This includes tasks such as disk defragmentation, disk checking, and backup and
recovery.
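As a sketch of how the file operations from step 4 surface to programs (assuming the POSIX
system-call interface; the file name is hypothetical):

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[64];

    /* create + open for read/write; permissions rw-r--r-- */
    int fd = open("example.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) return 1;

    write(fd, "hello", 5);        /* write */
    lseek(fd, 0, SEEK_SET);       /* seek back to the start */
    read(fd, buf, sizeof buf);    /* read */
    close(fd);                    /* close */

    unlink("example.dat");        /* delete */
    return 0;
}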



File system protection and security
Introduction

File protection in an operating system is the process of securing files from unauthorized access,
alteration, or deletion. It is critical for data security and ensures that sensitive information
remains confidential and secure. Operating systems provide various mechanisms and techniques
such as file permissions, encryption, access control lists, auditing, and physical file security to
protect files. Proper file protection involves user authentication, authorization, access control,
encryption, and auditing. Ongoing updates and patches are also necessary to prevent security
breaches. File protection in an operating system is essential to maintain data security and
minimize the risk of data breaches and other security incidents.

What is File protection?

File protection in an operating system refers to the various mechanisms and techniques used to
secure files from unauthorized access, alteration, or deletion. It involves controlling access to
files, ensuring their security and confidentiality, and preventing data breaches and other security
incidents. Operating systems provide several file protection features, including file permissions,
encryption, access control lists, auditing, and physical file security. These measures allow
administrators to manage access to files, determine who can access them, what actions can be
performed on them, and how they are stored and backed up. Proper file protection requires
ongoing updates and patches to fix vulnerabilities and prevent security breaches. It is crucial for
data security in the digital age where cyber threats are prevalent. By implementing file protection
measures, organizations can safeguard their files, maintain data confidentiality, and minimize the
risk of data breaches and other security incidents.

Types of File protection

File protection is an essential component of modern operating systems, ensuring that files are
secured from unauthorized access, alteration, or deletion. In this context, there are several types
of file protection mechanisms used in operating systems to provide robust data security.

● File Permissions − File permissions are a basic form of file protection that controls access
to files by setting permissions for users and groups. File permissions allow the system
administrator to assign specific access rights to users and groups, which can include read,
write, and execute privileges. These access rights can be assigned at the file or directory
level, allowing users and groups to access specific files or directories as needed. File
permissions can be modified by the system administrator at any time to adjust access
privileges, which helps to prevent unauthorized access.
● Encryption − Encryption is the process of converting plain text into ciphertext to protect
files from unauthorized access. Encrypted files can only be accessed by authorized users
who have the correct encryption key to decrypt them. Encryption is widely used to secure



sensitive data such as financial information, personal data, and other confidential
information. In an operating system, encryption can be applied to individual files or
entire directories, providing an extra layer of protection against unauthorized access.
● Access Control Lists (ACLs) − Access control lists (ACLs) are lists of permissions
attached to files and directories that define which users or groups have access to them and
what actions they can perform on them. ACLs can be more granular than file
permissions, allowing the system administrator to specify exactly which users or groups
can access specific files or directories. ACLs can also be used to grant or deny specific
permissions, such as read, write, or execute privileges, to individual users or groups.
● Auditing and Logging − Auditing and logging are mechanisms used to track and monitor
file access, changes, and deletions. It involves creating a record of all file access and
changes, including who accessed the file, what actions were performed, and when they
were performed. Auditing and logging can help to detect and prevent unauthorized access
and can also provide an audit trail for compliance purposes.
● Physical File Security − Physical file security involves protecting files from physical
damage or theft. It includes measures such as file storage and access control, backup and
recovery, and physical security best practices. Physical file security is essential for
ensuring the integrity and availability of critical data, as well as compliance with
regulatory requirements.

Overall, these types of file protection mechanisms are essential for ensuring data security and
minimizing the risk of data breaches and other security incidents in an operating system. The
choice of file protection mechanisms will depend on the specific requirements of the
organization, as well as the sensitivity and volume of the data being protected. However, a
combination of these file protection mechanisms can provide comprehensive protection against
various types of threats and vulnerabilities.

Advantages of File protection

File protection is an important aspect of modern operating systems that ensures data security and
integrity by preventing unauthorized access, alteration, or deletion of files. There are several
advantages of file protection mechanisms in an operating system, including −

● Data Security − File protection mechanisms such as encryption, access control lists, and
file permissions provide robust data security by preventing unauthorized access to files.
These mechanisms ensure that only authorized users can access files, which helps to
prevent data breaches and other security incidents. Data security is critical for
organizations that handle sensitive data such as personal data, financial information, and
intellectual property.
● Compliance − File protection mechanisms are essential for compliance with regulatory
requirements such as GDPR, HIPAA, and PCI-DSS. These regulations require



organizations to implement appropriate security measures to protect sensitive data from
unauthorized access, alteration, or deletion. Failure to comply with these regulations can
result in significant financial penalties and reputational damage.
● Business Continuity − File protection mechanisms are essential for ensuring business
continuity by preventing data loss due to accidental or malicious deletion, corruption, or
other types of damage. File protection mechanisms such as backup and recovery,
auditing, and logging can help to recover data quickly in the event of a data loss incident,
ensuring that business operations can resume as quickly as possible.
● Increased Productivity − File protection mechanisms can help to increase productivity by
ensuring that files are available to authorized users when they need them. By preventing
unauthorized access, alteration, or deletion of files, file protection mechanisms help to
minimize the risk of downtime and data loss incidents that can impact productivity.
● Enhanced Collaboration − File protection mechanisms can help to enhance collaboration
by allowing authorized users to access and share files securely. Access control lists, file
permissions, and encryption can help to ensure that files are only accessed by authorized
users, which helps to prevent conflicts and misunderstandings that can arise when
multiple users access the same file.
● Reputation − File protection mechanisms can enhance an organization's reputation by
demonstrating a commitment to data security and compliance. By implementing robust
file protection mechanisms, organizations can build trust with their customers, partners,
and stakeholders, which can have a positive impact on their reputation and bottom line.

Disadvantages of File protection

There are also some potential disadvantages of file protection in an operating system, including −

● Overhead − Some file protection mechanisms such as encryption, access control lists, and
auditing can add overhead to system performance. This can consume system resources and slow
down file access and processing times.
● Complexity − File protection mechanisms can be complex and require specialized
knowledge to implement and manage. This can lead to errors and misconfigurations that
compromise data security.
● Compatibility Issues − Some file protection mechanisms may not be compatible with all
types of files or applications, leading to compatibility issues and limitations in file usage.
● Cost − Implementing robust file protection mechanisms can be expensive, especially for
small organizations with limited budgets. This can make it difficult to achieve full data
protection.



● User Frustration − Stringent file protection mechanisms such as complex passwords,
frequent authentication requirements, and restricted access can frustrate users and impact
productivity.

Types of File protection in the File System


In computer systems, a lot of user information is stored; the objective of the operating system
is to keep the user's data safe from improper access to the system. Protection can be provided
in a number of ways. For a single laptop system, we might provide protection by locking the
computer in a desk drawer or file cabinet. For multi-user systems, different mechanisms are
used for protection.
Types of Access:
Files which are directly accessible to any user need protection, while files which are not
accessible to other users do not require any kind of protection. The protection mechanism
provides controlled access by limiting the types of access that can be made to a file. Access
can be granted or denied to any user depending on several factors, one of which is the type of
access required. Several different types of operations can be controlled:
● Read – Reading from a file.
● Write – Writing or rewriting the file.
● Execute – Loading the file and after loading the execution process starts.
● Append – Writing new information to an already existing file; editing must occur only at the
end of the existing file.
● Delete – Deleting a file which is of no use and reusing its space for other data.
● List – List the name and attributes of the file.
Operations like renaming, editing an existing file, and copying can also be controlled. There
are many protection mechanisms; each has different advantages and disadvantages, and the
chosen mechanism must be appropriate for the intended application.
Access Control:
There are different methods by which different users access a file. The general way of
protection is to associate identity-dependent access with all files and directories, via a
list called an access-control list (ACL), which specifies the names of the users and the types
of access associated with each user. The main problem with access lists is their length. If we
want to allow everyone to read a file, we must list all the users with read access. This
technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially if we do not know in
advance the list of the users in the system.



Previously, a directory entry was of fixed size, but it now becomes variable-sized, which
results in complicated space management. These problems can be resolved by use of a condensed
version of the access list. To condense the length of the access-control list, many systems
recognize three classifications of users in connection with each file:
● Owner – The user who created the file.
● Group – A set of members who have similar needs and share the same file.
● Universe – All other users in the system fall under this category.
The most common recent approach is to combine access-control lists with the normal general
owner, group, and universe access-control scheme. For example, Solaris uses the three
categories of access by default but allows access-control lists to be added to specific files
and directories when more fine-grained access control is desired.
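A short sketch of the owner/group/universe scheme on a POSIX system (the file name is
hypothetical); the chmod call below grants rwx to the owner, r-x to the group, and r-- to the
universe:

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat st;

    chmod("report.txt", 0754);    /* 0754 = owner rwx, group r-x, others r-- */

    if (stat("report.txt", &st) == 0) {
        printf("owner : %c%c%c\n",
               (st.st_mode & S_IRUSR) ? 'r' : '-',
               (st.st_mode & S_IWUSR) ? 'w' : '-',
               (st.st_mode & S_IXUSR) ? 'x' : '-');
        printf("group : %c%c%c\n",
               (st.st_mode & S_IRGRP) ? 'r' : '-',
               (st.st_mode & S_IWGRP) ? 'w' : '-',
               (st.st_mode & S_IXGRP) ? 'x' : '-');
        printf("others: %c%c%c\n",
               (st.st_mode & S_IROTH) ? 'r' : '-',
               (st.st_mode & S_IWOTH) ? 'w' : '-',
               (st.st_mode & S_IXOTH) ? 'x' : '-');
    }
    return 0;
}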

Other Protection Approaches:


Access to any system can also be controlled by a password. If the password is chosen at random
and changed often, this can effectively limit access to a file.
The use of passwords has a few disadvantages:
● The number of passwords can become very large, so it is difficult to remember them all.
● If one password is used for all the files, then once it is discovered, all files are
accessible; protection is on an all-or-none basis.
Key differences between the Security and Protection in Operating System

There are various head-to-head comparisons between security and protection in an operating
system. Some key differences are as follows:
● Protection deals with threats internal to the system: it controls which users and processes
may access which resources (files, memory, devices), using mechanisms such as file
permissions and access-control lists.
● Security deals with threats external to the system: it guards against unauthorized outside
access and attacks, using mechanisms such as authentication (e.g., passwords), encryption,
and auditing.
● In short, protection decides which internal entity may use which resource and how, whereas
security decides who may enter the system at all and keeps the data safe from outsiders.



Important Questions of UNIT 5

1. Explain the term RAID and its characteristics. Also, explain various RAID levels with
their advantages and disadvantages
2. Explain the concept of file system management. Also, explain various file allocation and
file access mechanisms in detail.
3. Suppose the following disk request sequence (track numbers) for a disk with 100 tracks
is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W
head is on track 49. Calculate the net head movement using:
(i) SSTF
(ii) SCAN
(iii) CSCAN
(iv) LOOK
4. Explain the following:
(i) Buffering
(ii) Polling
(iii) Direct Memory Access (DMA)
5. Explain the tree-structured directory. Explain various operations associated with a file.
6. Differentiate between a directory and a file. Explain file organization and access
mechanisms.
7. What do you mean by caching, spooling and error handling? Explain in detail. Explain
FCFS, SCAN & C-SCAN scheduling with examples.
8. Discuss the linked, contiguous, indexed, and multilevel indexed file allocation schemes.
Which allocation scheme will minimize the amount of space required in the directory
structure, and why? Write short notes on:
i) I/O Buffering
ii) Disk storage and scheduling
9. Define seek time and latency time.
10. Define the SCAN and C-SCAN scheduling algorithms. A hard disk has 2000 cylinders, numbered
from 0 to 1999. The drive is currently serving a request at cylinder 143, and the previous
request was at cylinder 125. The queue of pending requests, in FIFO order, is:
86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. What is the total distance (in cylinders)
that the disk arm moves to satisfy all the pending requests for each of the following disk-
scheduling algorithms?
(i) SSTF
(ii) FCFS
11. What are files? Explain the access methods for files.
12. Explain file system protection and security, and
(i) Linked file allocation methods
13. Explain the following free-space management methods:
(i) Bit vector (ii) Grouping (iii) Linked list (iv) Counting
14. What is a directory? Explain any two ways to implement a directory.
15. Suppose a moving-head disk with 200 tracks is currently serving a request for track 143 and
has just finished a request for track 125. The queue of requests, kept in FIFO order, is:
86, 147, 91, 177, 94, 150. What is the total head movement for the following scheduling
algorithms?
(i) FCFS (ii) SSTF (iii) C-SCAN
16. Write short notes on:
(i) I/O Buffering
(ii) Sequential File
(iii) Indexed File