OS Notes
MEERUT
Course Content
for
Operating System (Sub Code-BCS401)
B.Tech. II Year
CSE, CS, IT, CS-IT, CSE (AI), CSE (AI&ML), CSE
(DS) and CSE (IOT)
Mission of Institute
The mission of the institute is to educate young aspirants in various technical fields to fulfill global
requirements of human resources by providing sustainable quality education, training and
invigorating environment besides molding them into skilled competent and socially responsible
citizens who will lead the building of a powerful nation.
Vision of Department
To become a globally recognized department where talent at the frontier of the Internet of Things (IoT) is
nurtured to meet the needs of industry, society, and the economy, and to serve the nation and society.
Mission of Department
To provide resources of excellence, grooming fresh minds into highly competent IoT application
developers, and to enhance their knowledge and skills through emerging technologies and multi-
disciplinary engineering practices.
To equip students and provide the state - of- the art facilities to develop industry-ready IoT systems.
To promote industry collaborations to have the best careers.
Lecture No. | CO   | Lecture Topic                                                                              | Date Planned | Date Executed
1           | CO-1 | Introduction: Operating system and functions                                               |              |
2           | CO-1 | Classification of Operating systems: Batch, Interactive, Time Sharing, Real Time System    |              |
3           | CO-1 | Multiprocessor Systems, Multiuser Systems, Multiprocess Systems, Multithreaded Systems     |              |
4           | CO-1 | Operating System services, System Components                                               |              |
5           | CO-1 | Operating System Structure: Layered structure, Monolithic Kernel                           |              |
6           | CO-1 | Microkernel Systems, Reentrant Kernel                                                      |              |
7           | CO-2 | Scheduling concepts, Performance criteria, Process States, Process Transition Diagram      |              |
8           | CO-2 | Schedulers, Process Control Block (PCB)                                                    |              |
9           | CO-2 | Process address space, Process identification information, Threads and their management    |              |
10          | CO-2 | Scheduling: FCFS, SJF                                                                      |              |
11          | CO-2 | Priority Scheduling                                                                        |              |
12          | CO-2 | Round Robin, Multilevel Queue                                                              |              |
13          | CO-2 | Multilevel Feedback Queue, Multiprocessor Scheduling, Threads and their management         |              |
Lecture-1
The operating system (OS) is a program that runs on the hardware and enables the user to communicate with it by sending
input and receiving output. It allows the user, computer applications, and system hardware to connect with one another;
the operating system therefore acts as a hub. An operating system (OS) is the program that, after being initially loaded
into the computer by a boot program, manages all of the other application programs in a computer. The application
programs make use of the operating system by making requests for services through a defined application program
interface (API). In addition, users can interact directly with the operating system through a user interface, such as a
command-line interface (CLI) or a graphical user interface (GUI). Without an operating system, a computer and its
software would be useless. An operating system can be defined as an interface between the user and the hardware. It is
responsible for the execution of all processes, resource allocation, CPU management, file management and many
other tasks.
1.3 Why use an operating system?
An operating system brings powerful benefits to computer software and software development. Without an operating
system, every application would need to include its own UI, as well as the comprehensive code needed to handle all low -
level functionality of the underlying computer, such as disk storage, network interfaces and so on. Considering the vast
array of underlying hardware available, this would vastly bloat the size of every application and make software
development impractical. Many common tasks, such as sending a network packet or displaying text on a standard output
device, such as a display, can be offloaded to system software that serves as an intermediary between the applications and
the hardware. The system software provides a consistent and repeatable way for applications to interact with the hardware
without the applications needing to know any details about the hardware. As long as each application accesses the same
resources and services in the same way, that system software -- the operating system -- can service almost any number of
applications. This vastly reduces the amount of time and coding required to develop and debug an application, while
ensuring that users can control, configure and manage the system hardware through a common and well-understood
interface. Once installed, the operating system relies on a vast library of device drivers to tailor OS services to the
specific hardware environment.
Thus, every application may make a common call to a storage device, but the OS receives that call and uses the
corresponding driver to translate the call into actions (commands) needed for the underlying hardware on that specific
computer. The operating system provides a comprehensive platform that identifies, configures and manages a range of
hardware, including processors; memory devices and memory management; chipsets; storage; networking; port
communication, such as Video Graphics Array (VGA), High - Definition Multimedia Interface (HDMI) and Universal
Serial Bus (USB); and subsystem interfaces, such as Peripheral Component Interconnect Express (PCIe).
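The driver-translation idea above can be sketched in C with a table of function pointers, which is how many kernels expose a uniform device interface. Everything below (device_ops, console_write, null_write, os_write) is an invented illustration, not code from any particular operating system:

#include <stdio.h>
#include <string.h>

/* A minimal, invented sketch: applications make one common "write" call,
 * and the OS forwards it to the matching driver through a table of
 * function pointers. None of these names come from a real kernel. */
struct device_ops {
    const char *name;
    int (*write)(const char *buf, int len);     /* driver-specific action */
};

static int console_write(const char *buf, int len) {
    return (int)fwrite(buf, 1, (size_t)len, stdout);  /* the "hardware" action */
}

static int null_write(const char *buf, int len) {
    (void)buf;                                  /* discard everything, like /dev/null */
    return len;
}

static struct device_ops devices[] = {
    { "console", console_write },
    { "null",    null_write    },
};

/* The common call every application makes; the OS picks the right driver. */
static int os_write(const char *device, const char *buf, int len) {
    for (size_t i = 0; i < sizeof(devices) / sizeof(devices[0]); i++)
        if (strcmp(devices[i].name, device) == 0)
            return devices[i].write(buf, len);
    return -1;                                  /* no such device */
}

int main(void) {
    const char msg[] = "hello via the driver table\n";
    os_write("console", msg, (int)strlen(msg));
    os_write("null", msg, (int)strlen(msg));
    return 0;
}

The point is only that the caller never changes: swapping console_write for another driver function changes the hardware behaviour without touching the application's call.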
1.4 Characteristics of an Operating System:
Virtualization: Operating systems can provide Virtualization capabilities, allowing multiple operating systems or instances
of an operating system to run on a single physical machine. This can improve resource utilization and provide isolation
between different operating systems or applications.
Networking: Operating systems provide networking capabilities, allowing the computer system to connect to other systems
and devices over a network. This can include features such as network protocols, network interfaces, and network security.
The components of an operating system play a key role to make a variety of computer system parts work together. There
are the following components of an operating system, such as:
• Hardware
• Application Program
• Operating System
• Users
Hardware: Computer hardware is a collective term used to describe any of the physical components of an analog or digital
computer. Computer hardware can be categorized as being either internal or external components. Generally, internal
hardware components are those necessary for the proper functioning of the computer, while external hardware components
are attached to the computer to add or enhance functionality.
Operating System: The operating system (OS) is one of the programs that run on the hardware and enables the user to
communicate with it by sending input commands and output commands. It allows the user, computer applications, and
system hardware to connect with one another, therefore the operating system acts as a hub. Without an operating system, a
computer and software must be useless.
User: Users perform computation with the help of application programs. A user is someone or something that wants or
needs access to a system's resources; another word for user is client. A user can be a real person sitting at the machine. In
the Windows operating system, for example, a user refers to a person who has an account on the computer or device, and
users can have different levels of access and permissions depending on their account type.
Application Program: Application programs are programs written to solve specific problems, to produce specific reports,
or to update specific files. An application program is a computer program that performs useful work on behalf of the user of
the computer (for example a word processing or accounting program), as opposed to system software, which manages the
running of the computer itself, or development software, which is used by programmers to create other programs. An
application program is typically self-contained, storing data within files of a special (often proprietary) format that it can
create, open for editing and save to disk.
Booting: The process of starting or restarting the computer is known as booting. If the computer is switched off completely
and if turned on then it is called cold booting. Warm booting is a process of using the operating system to restart the
computer.
Processor Management: In a multiprogramming environment, the OS decides the order in which processes have access
to the processor, and how much processing time each process has. This function of the OS is called Process Scheduling.
An Operating System performs the following activities for processor management: it keeps track of the status of processes
(the program which performs this task is known as the traffic controller), allocates the CPU (that is, a processor) to a
process, and de-allocates the processor when a process no longer requires it.
Device Management: An OS manages device communication via its respective drivers. It performs the following activities
for device management. Keeps track of all devices connected to the system. Designates a program responsible for every
device known as the Input/output controller. Decides which process gets access to a certain device and for how long.
Allocates devices effectively and efficiently. De-allocates devices when they are no longer required.
Process Management: A process is a program under execution. The operating system manages all the processes so
that each process gets the CPU for a specific time to execute itself, and there will be less waiting time for each process.
This management is also called process scheduling. For process scheduling, the operating system uses various algorithms:
FCFS, SJF, LJF, Round Robin, and Priority Scheduling.
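As a concrete illustration of the simplest of these policies, the sketch below works out waiting and turnaround times under FCFS; the burst times (24, 3, 3) are made-up sample values and the process names P1 to P3 are illustrative only:

#include <stdio.h>

/* Minimal FCFS sketch: processes run in arrival order; each one's waiting
 * time is the sum of the burst times of everything that ran before it.
 * All processes are assumed to arrive at time 0; burst times are sample data. */
int main(void) {
    int burst[] = {24, 3, 3};                   /* CPU bursts of P1, P2, P3 */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];       /* finish time of this process */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_tat  += turnaround;
        wait += burst[i];                       /* next process starts here */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}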
File Management: A file system is organized into directories for efficient or easy navigation and usage. These directories
may contain other directories and other files. An Operating System carries out the following file management activities. It
keeps track of where information is stored, user access settings, the status of every file, and more… These facilities are
collectively known as the file system.
User Interface or Command Interpreter:
The user interacts with the computer system through the operating system. Hence the OS acts as an interface
between the user and the computer hardware. This user interface is offered through a set of commands or a
graphical user interface (GUI). Through this interface, the user makes interaction with the applications and
the machine hardware.
Security: The operating system uses password protection and other similar techniques to protect user data. It also prevents
unauthorized access to programs and user data.
Job Accounting: The operating system keeps track of the time and resources used by various tasks and users; this information
can be used to track resource usage for a particular user or group of users.
Error-detecting aids: The operating system constantly monitors the system to detect errors and avoid a malfunctioning
computer system.
Lecture-2
Batch processing was very popular in the 1970s. Jobs were executed in batches. People used a single
computer known as a mainframe. Users of batch operating systems do not interact directly with the computer. Each user
prepares a job using an offline device like a punch card and submits it to the computer operator. Jobs with similar
requirements are grouped and executed as a group to speed up processing. Once the programmers have left their programs
with the operator, the operator sorts the programs with similar needs into batches. The batch operating system groups jobs
that perform similar functions; each group is treated as a batch and executed as a unit. A computer system with this
operating system performs the following batch processing activities.
● A job is a single unit that consists of a preset sequence of commands, data, and programs.
● Processing takes place in the order in which jobs are received, i.e., first come, first served.
● Jobs are stored in memory and executed without the need for manual intervention. When a job is successfully run, the
operating system releases its memory.
Batch operating systems place less stress on the CPU and involve minimal user interaction, which is why they are
still used today. Another benefit of batch operating systems is that huge repetitive jobs can be completed without having to
interact with the computer to tell it what to do after each job finishes. There are mainly two kinds of batch operating systems:
● Old batch operating systems weren't interactive, which means that the user did not interact with the program while
executing it.
● Modern batch operating systems now support interaction. For example, you may schedule a job, and
when the specified time arrives, the computer signals the processor that the time is up.
How does the Batch Operating System work?
● The operating system keeps the number of jobs in memory and performs them one at a time.
● Jobs are processed in a first-come, first-served manner.
● Each job set is defined as a batch. When a task is finished, its memory is freed, and the work’s
output is transferred into an output spool for later printing or processing.
● User interaction is limited in the batch operating system. When the system takes the task from the user, the
user is free.
● You may also use the batch processing system to update data relating to any transactions or records.
There are various characteristics of the Batch Operating System. Some of them are as follows:
● In this case, the CPU executes the jobs in the same sequence in which they are sent to it by the operator, which implies
that the task sent to the CPU first will be executed first. This is also known as 'first come, first served'.
● The word job refers to the commands or instructions that the user wants the program to perform.
● A batch operating system runs a set of user-supplied instructions composed of distinct instructions and programs
with several similarities.
● When a task is successfully executed, the OS releases the memory space held by that job.
● The user does not interface directly with the operating system in a batch operating system; rather, all instructions
are sent to the operator.
● The operator evaluates the user's instructions and creates a set of instructions having similar properties.
Advantages
There are various advantages of the Batch Operating System. Some of them are as follows:
● Although it is not easy for a user to forecast how long a job will take to complete, the batch system's processor knows
how long each job in the queue will take to finish.
● This system can easily manage large jobs again and again.
● The batch process can be divided into several stages to increase processing speed.
● When a process is finished, the next job from the job spool is run without any user interaction.
● CPU utilization gets improved.
Disadvantages
There are various disadvantages of the Batch Operating System. Some of them are as follows:
● When a job fails, it must be rescheduled for completion, and it may take a long time to complete the task.
● Computer operators must have full knowledge of batch systems.
● The batch system is quite difficult to debug.
● The computer system and the user have no direct interaction.
● If a job enters an infinite loop, other jobs must wait for an unknown period of
time.
Uniprogramming Operating System
Uniprogramming implies that only a single task or program is in main memory at a particular time. It was more common in
early computers and mobiles, where one could run only a single application at a time.
Disadvantages of uni-programming:
● Wastage of CPU time.
● No user interaction.
● No mechanism to prioritize processes.
Multiprogramming is the ability of an operating system to execute more than one program using a single processor
machine. More than one task, program, or job is present in main memory at any point of time. Buffering and
spooling can overlap I/O and CPU tasks to improve system performance, but they have the limitation that a single user
cannot always keep the CPU or I/O devices busy all the time. To increase resource utilization, multiprogramming is used.
The OS picks and starts the execution of one of the jobs in memory; whenever that job does not need the CPU, for example
because it is working with I/O, the CPU would otherwise be idle, so the OS switches to another job in memory and the CPU
executes a portion of it until that job issues an I/O request, and so on. Let P1 and P2 be two programs present in main memory.
The OS picks one program and starts executing it. During execution, if the P1 program requires an I/O operation, then the OS
simply switches over to the P2 program. If P2 also requires I/O, it switches to the next program, and so on. If there is
no other program remaining, the CPU passes control back to the previous program.
Features of Multiprogramming
Disadvantages
● CPU scheduling is required because many jobs are ready to run on the CPU simultaneously.
● The user is not able to interact with a job while it is executing.
● Programmers also cannot modify a program that is being executed.
● If several jobs are ready in main memory and there is not enough space for all of them, then
the system has to choose among them by making a decision; this process is called job scheduling.
● When the operating system selects a job from the group of jobs and loads that job into memory for execution,
it needs memory management; if several such jobs are ready, it also needs CPU scheduling.
Advantages of Multitasking:
Background Processing
A multitasking operating system provides a better environment for background processes to run. These background
programs are not visible to most users, but they help other programs like firewalls, antivirus software, and others run well.
Disadvantages of Multitasking:
Processor Boundation
The system may run programs slowly because of the poor speed of their processors, and their reaction time might rise when
processing many programs. To solve this problem, more processing power is required.
Memory Boundation
The computer's performance may get slow due to the multiple programs running at the same time because the main memory
gets overloaded while loading multiple programs. Because the CPU is unable to provide different times for each program,
reaction time increases. The primary cause of this issue is that it makes use of low-capacity RAM. As a result, the RAM
capacity can be raised to provide a solution.
CPU Heat Up
The processor is kept busy for longer periods to complete multiple tasks in a multitasking environment, so the CPU
generates more heat.
Memory: The physical memory present inside the system is where storage occurs. It is also known as Random Access
Memory (RAM). The system can directly access and modify the data present in main memory, so every program to be
executed must first be copied from physical storage, such as a hard disk, into main memory. Main memory is an important
part of the OS because it determines how many programs may be executed simultaneously.
Kernel: A multi-user operating system makes use of the Kernel component, which is built in a low-level language. This
component is embedded in the computer system's main memory and may interact directly with the system's H/W.
Processor: The CPU (Central Processing Unit) of the computer is sometimes known as the computer’s brain. In large
machines, the CPU requires more integrated circuits (ICs). On smaller computers, the CPU fits on a single chip known as a
microprocessor.
User Interface: The user interface is the way of interaction between users and all software and hardware processes. It
enables the users to interact with the computer system in a simple manner.
Device Handler: Each input and output device needs its device handler. The device handler's primary goal is to provide all
requests from the whole device request queue pool. The device handler operates in continuous cycle mode, first discarding
the I/O request block from the queue side.
Spooler: Spooler stands for 'Simultaneous Peripheral Operations On-Line'. The spooler collects the output of computer
processes and feeds it to output devices at their own pace. Spooling is used by a variety of output devices, including printers.
Time-Sliced Systems: It's a system in which each user's job gets a specific amount of CPU time. In other words, each work
is assigned to a specific time period. These time slices look too small to the user's eyes. An internal component known as the
'Scheduler' decides to run the next job. This scheduler determines and executes the job that must perform based on the
priority cycle. It is a system where each user task is assigned a short period of CPU time. The CPU time gets divided into
time slices where each slice is too small for the user. This method of dividing the CPU time is known as time slicing. Time
Slicing is a scheduling algorithm also called Round Robin Scheduling. It gives equal opportunity to all the processes
running in the system to use CPU time.
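Because time slicing is essentially Round Robin scheduling, a small C sketch can show how the quantum bounds each turn on the CPU. The burst times (5, 3, 8) and the quantum of 2 units below are invented sample data:

#include <stdio.h>

/* Minimal Round Robin sketch: each ready process runs for at most one time
 * quantum, then the CPU moves on to the next one. Burst times and the
 * quantum are invented sample values. */
int main(void) {
    int remaining[] = {5, 3, 8};                /* remaining bursts of P1..P3 */
    int n = sizeof(remaining) / sizeof(remaining[0]);
    int quantum = 2, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;                       /* this process already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;
            remaining[i] -= slice;
            printf("t=%2d: P%d ran for %d unit(s)%s\n",
                   clock, i + 1, slice, remaining[i] == 0 ? " and finished" : "");
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}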
Multiprocessor System: Multiple processors are used in this system, which helps to improve overall performance. If one of
the processors in this system fails, the other processor is responsible for completing its assigned task. Multiprocessor systems
are systems that use multiple processors at the same time. Using multiple processors increases the system performance as all
the processors run side by side. It works at a pace that is faster than the single-processor operating system. In a
multiprocessor system, if one processor fails, another processor completes its assigned tasks.
Virus: In the multi-user operating system, if a virus gets into a single network of computers, it will pave the way for the virus to
affect all the computers in the network.
Visibility of data: Privacy of data and information becomes a concern as all the information in the computers gets shared in
public.
Multiple accounts: Multiple accounts on a single computer may not be suitable for all users. Thus, it is better to have multiple
PCs for each user.
For Example: UNIX Operating system is one of the most widely used multiprocessing systems.
To employ a multiprocessing operating system effectively, the computer system must have the following things:
● Failure of one processor does not affect the functioning of other processors.
● It divides all the workload equally to the available processors.
● Make use of available resources efficiently.
Symmetric vs. Asymmetric Multiprocessing

Feature   | Symmetric Multiprocessing                                            | Asymmetric Multiprocessing
Basic     | Each CPU executes the OS operations.                                 | Only the master processor carries out the OS functions.
Ease      | Symmetric multiprocessors are difficult to design and understand, since all of the processors must be synchronized to maintain load balance. | Simpler, since only the master processor has access to the system data structures.
Processor | All processors use a common ready queue, or each may have its private ready queue. | The master processor assigns processes to the slave processors, or they have some predefined tasks.
Failure   | When a CPU fails, the system's computing capacity decreases.         | If the master processor fails, control is passed to a slave processor. If a slave processor fails, its task is passed to a different processor.
Real-Time System:
Real-time operating systems (RTOS) are used in environments where a large number of events, mostly external to the
computer system, must be accepted and processed in a short time or within certain deadlines. Such applications include
industrial control, telephone switching equipment, flight control, and real-time simulations. With an RTOS, the processing
time is measured in tenths of seconds. This system is
time-bound and has a fixed deadline. The processing in this type of system must occur within the specified constraints.
Otherwise, this will lead to system failure.
Types of real-time operating systems:
Firm Real-time Operating System: An RTOS of this type also has to follow deadlines. However, missing a deadline may
have a small impact, but it can still cause unintended consequences, including a reduction in the quality of the product.
Example: multimedia applications.
Advantages:
Maximum Consumption – Maximum utilization of devices and systems, and thus more output from all the resources.
Task Shifting – The time assigned for shifting tasks in these systems is very small.
Disadvantages:
Limited Tasks – Very few tasks run simultaneously, and the system concentrates on only a few applications to avoid
errors.
Use Heavy System Resources – Sometimes the system resources are not so good, and they are expensive as well.
Complex Algorithms – The algorithms are very complex and difficult for the designer to write.
Device Driver and Interrupt Signals – It needs specific device drivers and interrupt signals to respond to interrupts as
early as possible.
Thread Priority – It is not good to set thread priority, as these systems are very rarely switching tasks.
Minimum Switching – RTOS performs minimal task switching.
Batch Processing: It is defined as the process of gathering programs and data together in a batch before performing them. The
job of the operating system is to define the jobs as a single unit by using some already defined sequence of commands or data,
etc. Before they are performed or carried out, these are stored in the memory of the system, and their processing follows a
FIFO basis. The operating system releases the memory and then copies the output into an output spool for later printing when the
job is finished. Its use is that it basically improves the system performance because a new job begins only when the old one is
completed without any interference from the user. One disadvantage is that there is a small chance that the jobs will enter an
infinite loop. Debugging is also somewhat difficult with batch processing.
Multitasking: The CPU can execute many tasks simultaneously by switching between them. This is known as a Time-Sharing
System, and it has a very fast response time. The switching is so fast that users can easily interact with each running
program.
Multiprogramming: Multiprogramming happens when the memory of the system stores multiple processes. The job of the
operating system here is to run these processes concurrently on the same processor. Multiple processes share the CPU, thus
increasing CPU utilization. Now, the CPU only performs one job at a particular time while the rest wait for the processor to be
assigned to them. The operating system takes care of the fact that the CPU is never idle by using its memory management
programs so that it can monitor the state of all system resources and active programs. One advantage of this is that it gives the
user the feeling that the CPU is working on multiple programs simultaneously.
Real-Time System: Dedicated embedded systems are real-time systems. The main job of the operating system here is to read
and react to sensor data and then provide a response within a fixed time period, thereby ensuring good performance.
Distributed Environment: A distributed environment consists of many independent processors. The job of the operating
system here is to distribute the computation logic among the physical processors and, at the same time, manage communication
between them. Each processor has its own local memory, so they do not share memory.
Interactivity: Interactivity is defined as the power of a user to interact with the system. The main job of the operating system
here is that it basically provides an interface for interacting with the system, manages I/O devices, and also ensures a fast
response time.
Spooling: Spooling is defined as the process of pushing the data from different I/O jobs into a buffer or an area of memory
that the devices can access when they are ready.
Usability: An operating system is designed to perform something, and the interactiveness allows the user to manage the tasks
more or less in real time.
Security: Security policy enforcement is simpler. In non-interactive systems, the user virtually always
knows what their programs will do during their lifetime, thus allowing bugs to be forecast and corrected.
Tough to design: Depending on the target device, interactivity might prove challenging to design because the user must be
prepared for every input. What about having many inputs? The state of a program can change at any particular time, all the
programs should be handled in some way, and it doesn't always work out properly.
Example of an Interactive Operating System:
● Unix Operating System
● Disk Operating System
What is a Multithreading Operating System?
A multithreaded operating system is an operating system that supports multiple threads of execution within a single process.
Threads are lightweight processes that share the same memory space, allowing for more efficient concurrent execution compared
to traditional heavyweight processes. In a multithreaded operating system, each thread within a process can execute
independently, performing different tasks simultaneously. This allows for better utilization of system resources such as CPU time
and memory, as well as improved responsiveness and throughput for applications.
Multithreading can provide several advantages, including:
Concurrency: Multiple threads can execute concurrently within a single process, allowing for better responsiveness and
improved performance, especially on multi-core processors.
Resource Sharing: Threads within the same process share resources such as memory and file descriptors, reducing overhead
compared to separate processes.
Simplified Programming: Multithreading can simplify programming by allowing developers to write concurrent code more
easily than with processes, as threads within the same process can communicate more directly and efficiently.
Efficient Communication: Threads within the same process can communicate through shared memory, message passing, or
other inter-thread communication mechanisms, allowing for efficient data exchange.
Multithreading Model:
Multithreading allows the application to divide its task into individual threads. In multithreading, the same process or task can be
done by a number of threads; in other words, there is more than one thread to perform the task. With the
use of multithreading, multitasking can be achieved.
The main drawback of single threading systems is that only one task can be performed at a time, so to overcome the drawback of
this single threading, there is multithreading that allows multiple tasks to be performed.
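As a concrete illustration (not from these notes), the following POSIX-threads sketch creates several threads inside one process; they all update the same global counter, which is exactly the shared-address-space property described above. It assumes a Unix-like system with pthreads and is compiled with the -pthread flag:

#include <pthread.h>
#include <stdio.h>

/* Minimal multithreading sketch using POSIX threads: several threads run
 * within one process and share the same address space (the global counter).
 * The mutex prevents the threads from corrupting the shared value. */
#define NUM_THREADS 4

static long counter = 0;                        /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                              /* shared-memory communication */
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld finished\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_THREADS];

    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);             /* wait for every thread */

    printf("final counter = %ld\n", counter);   /* 4 * 100000 = 400000 */
    return 0;
}

Compiled with gcc -pthread, the program prints a final counter of 400000, showing that the four threads really did update one shared variable.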
In an operating system, threads are divided into user-level threads and kernel-level threads. User-level threads
are handled independently, above the kernel, and are thereby managed without any kernel support. On the other hand,
the operating system directly manages the kernel-level threads. Nevertheless, there must be a form of relationship
between user-level and kernel-level threads.
There exist three established multithreading models classifying these relationships:
Many to One Model
The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an
effective context-switching environment, easily implemented even on a simple kernel with no thread support.
The disadvantage of this model is that since there is only one kernel-level thread scheduled at any given time, this
model cannot take advantage of the parallelism offered by multithreaded processors or multiprocessor
systems. In this model, all the thread management is done in user space. If a blocking call occurs, this model blocks the whole
process.
In the figure below, the many-to-one model associates all user-level threads with a single kernel-level thread.
One to One Model
The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship
facilitates the running of multiple threads in parallel. However, this benefit comes with a drawback: the creation
of every new user thread must include creating a corresponding kernel thread, causing overhead which can
hinder the performance of the parent process. The Windows and Linux operating systems try to tackle this problem
by limiting the growth of the thread count.
In the figure above, the one-to-one model associates each user-level thread with a single kernel-level thread.
Many to Many Model
In this type of model, there are several user-level threads and several kernel-level threads. The number of kernel
threads created depends upon the particular application. The developer can create many threads at both levels, but the
numbers may not be the same. The many-to-many model is a compromise between the other two models. In this model, if any thread
makes a blocking system call, the kernel can schedule another thread for execution. Also, with the introduction of
multiple threads, the complexity of the previous models is not present. Though this model allows the creation of
multiple kernel threads, true concurrency cannot be achieved by this model, because the kernel can schedule
only one process at a time.
The many-to-many version of the multithreading model associates several user-level threads with the same or a smaller
number of kernel-level threads, as in the figure below.
User-Level Threads vs. Kernel-Level Threads

Feature                 | User-Level Threads                                                   | Kernel-Level Threads
Blocking Operation      | If one thread is blocked, it blocks all other threads in the same process. | If a thread in the kernel is blocked, it does not block all other threads in the same process.
Thread Management       | The thread library includes the source code for thread creation, data transfer, thread destruction, message passing, and thread scheduling. | The application code on kernel-level threads does not include thread management code; it is simply an API to the kernel mode.
Creation and Management | They may be created and managed much faster.                        | They take much more time to create and manage.
Examples                | Some instances of user-level threads are Java threads and POSIX threads. | Some instances of kernel-level threads are Windows and Solaris threads.
Operating System        | Any OS may support them.                                            | Only specific operating systems support them.
● It delivers better application performance because of the few interfaces between the application program and the
hardware.
● Easy for kernel developers to develop such an operating system.
● It can perform the fundamental operations.
● It uses straightforward commands.
● Lack of flexibility
● Layered Approach
In a layered approach, the OS consists of several layers where each layer has a well-defined functionality and each layer
is designed, coded and tested independently.
The layered structure approach breaks up the operating system into different layers and retains much more control on the
system. The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. These layers
are so designed that each layer uses the functions of the lower-level layers only. This simplifies the debugging process:
if the lower-level layers have already been debugged and an error occurs, the error must be in that layer only, since
the lower-level layers have already been verified.
This allows implementers to change the inner workings and increases modularity.
As long as the external interface of the routines doesn't change, developers have more freedom to change the inner
workings of the routines.
The main advantage is the simplicity of construction and debugging. The main difficulty is defining the various layers.
The main disadvantage of this structure is that the data needs to be modified and passed on at each layer, which adds
overhead to the system. Moreover, careful planning of the layers is necessary as a layer can use only lower-level
layers. UNIX is an example of this structure.
Layering provides a distinct advantage in an operating system. All the layers can be defined separately and interact with
each other as required. Also, it is easier to create, maintain and update the system if it is done in the form of layers.
Change in one layer specification does not affect the rest of the layers.
Fig:1.7(layered architecture)
This type of operating system was created as an improvement over the early monolithic systems. The operating system is
split into various layers in the layered operating system, and each of the layers has different functionalities. There are some
rules in the implementation of the layers as follows.
A particular layer can access all the layers present below it, but it cannot access the layers above it. That is, layer n-1 can
access all the layers from n-2 to 0, but it cannot access layer n.
Layer 0 deals with allocating processes and switching between processes when interrupts occur or the timer expires. It
also deals with the basic multiprogramming of the CPU. Thus, if the user layer wants to interact with the hardware layer, the
request and response travel through all the layers from n-1 down to 1. Each layer must be designed and implemented such
that it needs only the services provided by the layers below it.
There are six layers in the layered operating system. A diagram demonstrating these layers is as follows:
Memory Management: Memory management deals with memory and moving processes from disk to primary memory for
execution and back again. This is handled by the third layer of the operating system. All memory management is associated
with this layer. There are various types of memories in the computer like RAM, ROM.
If you consider RAM, then it is concerned with swapping memory in and out. When our computer runs, some
processes move to the main memory (RAM) for execution, and when programs, such as a calculator, exit, they are removed
from the main memory.
Process Management: This layer is responsible for managing the processes, i.e., assigning the processor to a process and
deciding how many processes will stay in the waiting schedule. The priority of the processes is also managed in this layer.
The different algorithms used for process scheduling are FCFS (first come, first served), SJF (shortest job first), priority
scheduling, round-robin scheduling, etc.
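To contrast with the FCFS sketch shown earlier, here is a brief sketch of non-preemptive SJF: among the jobs that have not yet run, the shortest one is always picked next. The burst times (6, 8, 7, 3) are invented sample values:

#include <stdio.h>

/* Minimal non-preemptive SJF sketch: among the jobs that have not run yet,
 * always pick the one with the smallest burst time. All jobs are assumed to
 * arrive at time 0; the burst times are invented sample values. */
int main(void) {
    int burst[] = {6, 8, 7, 3};
    int n = sizeof(burst) / sizeof(burst[0]);
    int done[4] = {0};
    int clock = 0, total_wait = 0;

    for (int run = 0; run < n; run++) {
        int pick = -1;
        for (int i = 0; i < n; i++)             /* find shortest unfinished job */
            if (!done[i] && (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        printf("P%d: waiting=%d\n", pick + 1, clock);
        total_wait += clock;
        clock += burst[pick];
        done[pick] = 1;
    }
    printf("average waiting=%.2f\n", (double)total_wait / n);
    return 0;
}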
There are several advantages of the layered structure of operating system design, such as:
Modularity: This design promotes modularity as each layer performs only the tasks it is scheduled to perform.
Easy debugging: As the layers are discrete, it is very easy to debug. Suppose an error occurs in the CPU scheduling layer;
the developer needs to search only that particular layer to debug, unlike the monolithic system where all the services are
present together.
Easy update: A modification made in a particular layer will not affect the other layers.
No direct access to hardware: The hardware layer is the innermost layer present in the design. So a user can use the
services of hardware but cannot directly modify or access it, unlike the Simple system in which the user had direct access to
the hardware.
Abstraction: Every layer is concerned with its functions. So the functions and implementations of the other layers are
abstract to it.
Though this system has several advantages over the Monolithic and Simple design, there are also some disadvantages, such
as:
Complex and careful implementation: As a layer can access the services of the layers below it, so the arrangement of the
layers must be done carefully. For example, the backing storage layer uses the services of the memory management layer. So
it must be kept below the memory management layer. Thus with great modularity comes complex implementation.
Slower in execution: If a layer wants to interact with another layer, its request has to travel through all the layers present
between the two interacting layers. This increases response time, unlike the monolithic system, which is faster.
Thus, an increase in the number of layers may lead to a very inefficient design.
Functionality: It is not always possible to divide the functionalities. Many times, they are interrelated and can't be separated.
Lecture-6
Objectives of Kernel:
Inter-Process Communication
Inter-process communication refers to how processes interact with one another. A process has several threads. In the kernel
space, threads of any process interact with one another. Messages are sent and received across threads using ports. At the
kernel level, there are several ports like process port, exceptional port, bootstrap port, and registered port. All of these
ports interact with user-space processes.
Memory Management
Memory management is the process of allocating space in main memory for processes. However, there is also the
creation of virtual memory for processes. Virtual memory means that if a process has a bigger size than the main
memory, it is partitioned into portions and stored. After that, one by one, every part of the process is stored in the main
memory until the CPU executes it.
CPU Scheduling
CPU scheduling refers to deciding which process the CPU will execute next. All processes are queued and executed one at a
time. Every process has a level of priority, and the process with the highest priority is executed first. CPU scheduling
aids in optimizing CPU utilization. In addition, resources are used more efficiently, and the waiting time is minimized,
so a process spends less time in the queue and resources are allocated to it more quickly. CPU scheduling also reduces
response and turnaround times.
Components of Microkernel
A microkernel contains only the system's basic functions. A component is only included in the microkernel if
putting it outside would disrupt the system's operation. The user mode should be used for all other non- essential
components. The minimum functionalities needed in the microkernel are as follows:
● In the microkernel, processor scheduling algorithms are also required. Process and thread schedulers are included.
● Address spaces and other memory management mechanisms should be incorporated in the microkernel.
Memory protection features are also included.
● Inter-process communication (IPC) is used to manage the servers that run in their own address spaces.
Disadvantages
● When the drivers are implemented as procedures, a context switch or a function call is needed.
● In a microkernel system, providing services are more costly than in a traditional monolithic system.
● The performance of a microkernel system might be indifferent and cause issues.
● Execution is slower
A monolithic design of the operating system architecture makes no special accommodation for the special nature of
the operating system. Although the design follows the separation of concerns, no attempt is made to restrict the
privileges granted to the individual parts of the operating system. The entire operating system executes with
maximum privileges. The communication overhead inside the monolithic operating system is the same as that of any
other software and is considered relatively low.
CP/M and DOS are simple examples of monolithic operating systems. Both CP/M and DOS are operating systems
that share a single address space with the applications. In CP/M, the 16-bit address space starts with system
variables and the application area. It ends with three parts of the operating system, namely CCP (Console
Command Processor), BDOS (Basic Disk Operating System), and BIOS (Basic Input/output System).
In DOS, the 20-bit address space starts with the array of interrupt vectors and the system variables,
followed by the resident part of DOS and the application area and ending with memory block used by the
video card and BIOS.
Simple structure: This type of operating system has a simple structure. All the components needed for processing are embedded
into the kernel.
Works for smaller tasks: It works better for performing smaller tasks as it can handle limited resources.
Communication between components: All the components can directly communicate with each other and
also with the kernel.
Fast operating system: The code to make a monolithic kernel is very fast and robust
A kernel is the core part of an operating system, and it manages the system resources. A kernel is like a bridge between
the application and hardware of the computer. The kernel can be classified further into two categories, Microkernel and
Monolithic Kernel.
The microkernel is a type of kernel that allows customization of the operating system. It runs on privileged mode and
provides low-level address space management and Inter-Process Communication (IPC). Moreover, OS services such as
file system, virtual memory manager, and CPU scheduler are on top of the microkernel. Each service has its own
address space to make them secure. Besides, the applications also have their own address spaces. Therefore, there is
protection among applications, OS Services, and kernels.
• A monolithic kernel is another classification of the kernel. In monolithic kernel-based systems, each application
has its own address space. Like microkernel, this one also manages system resources between application and
hardware, but user services and kernel services are implemented under the same address space. It increases the
size of the kernel, thus increasing the size of the operating system as well.
• This kernel provides CPU scheduling, memory management, file management, and other system functions
through system calls. As both services are implemented under the same address space, this makes operating
system execution faster.
Monolithic Kernel vs. Microkernel

Feature       | Monolithic Kernel                                                   | Microkernel
Definition    | A monolithic kernel is a type of kernel in operating systems where the entire operating system works in the kernel space. | A microkernel is a kernel type that provides low-level address space management, thread management, and inter-process communication to implement an operating system.
Address space | Both user services and kernel services are kept in the same address space. | User services and kernel services are kept in separate address spaces.
Size          | The monolithic kernel is larger than the microkernel.               | The microkernel is smaller in size.
OS services   | The kernel contains the OS services.                                | The OS services and the kernel are separated.
Extendibility | The monolithic kernel is quite complicated to extend.               | The microkernel is easily extensible.
Security      | If a service crashes, the whole system crashes.                     | If a service crashes, it does not affect the working of the microkernel.
Customization | It is difficult to add new functionalities to the monolithic kernel, so it is not customizable. | It is easier to add new functionalities to the microkernel, so it is more customizable.
Code          | Less coding is required to write a monolithic kernel.               | A microkernel requires more coding.
Examples      | Linux, FreeBSD, OpenBSD, NetBSD, Microsoft Windows (95, 98, Me), Solaris, HP-UX, DOS, OpenVMS, XTS-400, etc. | QNX, Symbian, L4Linux, Singularity, K42, Mac OS X, Integrity, PikeOS, HURD, Minix, and Coyotos.
The dual-mode operation in the operating system protects the operating system from illegal users. We accomplish
this protection by designating some of the system instructions as privileged instructions that can cause harm. The
hardware allows privileged instructions to be executed only in kernel mode. An example of a privileged
instruction is the command to switch to user mode. Other examples include monitoring of I/O, controlling timers and
handling interrupts. To ensure proper operating system execution, we must differentiate between the execution of
operating-system code and user-defined code. Most computer systems provide hardware support that helps
distinguish between different execution modes. The operating system has two modes, user mode and kernel
mode, and a mode bit is required to identify in which particular mode the current instruction is executing. If the mode bit is 1,
the system operates in user mode, and if the mode bit is 0, it operates in kernel mode. NOTE: At boot time, the system
always starts in kernel mode.
The operating system has two modes of operation to ensure it works correctly:
1. User Mode
2. Kernel Mode
When the computer system runs user applications like file creation or any other application program in the User Mode, this
mode does not have direct access to the computer's hardware. For performing hardware related tasks, like when the user
application requests for a service from the operating system or some interrupt occurs, in these cases, the system must switch
to Kernel Mode. The mode bit of the user mode is 1. This means that if the mode bit of the system's processor is 1, then the
system will be in the User Mode.
All the bottom level tasks of the Operating system are performed in the Kernel Mode. As the Kernel space has direct
access to the hardware of the system, the kernel mode handles all the processes which require hardware support.
Apart from this, the main functionality of the Kernel Mode is to execute privileged instructions. These privileged
instructions are not provided with user access, and that's why these instructions cannot be processed in the User mode.
So, all the processes and instructions that the user is restricted from interfering with are executed in the Kernel Mode of
the Operating System. The mode bit for the Kernel Mode is 0. So, for the system to function in the Kernel Mode, the
mode bit of the processor must be 0.
Certain types of processes need to be hidden from the user, and certain tasks do not require any type of
hardware support. Using the dual mode of the OS, these tasks can be dealt with separately. Also, the Operating
System needs to function in the dual mode because the Kernel Level programs perform all the bottom level functions
of the OS like process management, Memory management, etc. If the user alters these, then this can cause an entire
system failure. So, for specifying the access to the users only to the tasks of their use, Dual Mode is necessary for an
Operating system.
So, whenever the system works on the user applications, it is in the User mode. Whenever the user requests some
hardware services, a transition from User mode to Kernel mode occurs, and this is done by changing the mode bit
from 1 to 0. And for returning back into the User mode, the mode bit is again changed to 1.
User Mode and Kernel Mode Switching. In its lifespan, a process executes in user mode and kernel mode. The user
mode is a normal mode where the process has limited access. However, the kernel mode is the privileged mode
where the process has unrestricted access to system resources like hardware, memory, etc.
A process can access services like hardware I/O by executing in kernel mode and accessing kernel data. Anything related
to process management, I/O hardware management, and memory management requires a process to execute in
Kernel mode.
It is important to know that a process in Kernel mode gets the power to access any device and memory, and at the
same time any crash in kernel mode brings down the whole system. But any crash in user mode brings down the
faulty process only. The kernel provides System Call Interface (SCI), which are entry points for user processes to
enter kernel mode. System calls are the only way through which a process can go into kernel mode from user mode.
The below diagram explains user mode to kernel mode switching in detail.
With the mode bit, we can distinguish between a task executed on behalf of the operating system and one executed on
behalf of the user. When the computer system executes on behalf of a user application, the system is in user mode.
However, when a user application requests a service from the operating system via a system call, it must transition
from user to kernel mode to fulfill the request. This architectural enhancement is useful for many other
aspects of system operation as well. At system boot time, the hardware starts in kernel mode. The operating system is then
loaded and starts user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user
mode to kernel mode, changing the state of the mode bit to 0. Thus, whenever the operating system gains control of the
computer, it is in kernel mode. The system always switches to user mode by setting the mode bit to 1 before passing
control to a user program.
A system call is a way for a user program to interface with the operating system. The program requests several
services, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be written
in assembly language or a high-level language like C or Pascal. System calls are predefined functions that the
operating system may directly invoke if a high-level language is used. A system call is a method for a computer
program to request a service from the kernel of the operating system on which it is running. A system call is a
method of interacting with the operating system via programs. A system call is a request from computer software to
an operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts as a
link between the operating system and a process, allowing user-level programs to request operating system services.
The kernel system can only be accessed using system calls. System calls are required for any programs that use
resources.
How is a system call made?
When computer software needs to access the operating system's kernel, it makes a system call. The system call uses
an API to expose the operating system's services to user programs. It is the only method to access the kernel system.
All programs or processes that require resources for execution must use system calls, as they serve as an interface
between the operating system and user programs.
Below are some examples of how a system call varies from a user function.
• A system call function may create and use kernel processes to execute the asynchronous processing.
• A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege executes
in the kernel protection domain.
• System calls are not permitted to use shared libraries or any symbols that are not present in the kernel protection
domain.
• The code and data for system calls are stored in global kernel memory.
There are various situations where system calls are required in the operating system. Some of these
situations are as follows:
• A system call is required when a file system wants to create or delete a file.
• Network connections require system calls to send and receive data packets.
• If you want to read or write a file, you need to make system calls (a short sketch follows this list).
• If you want to access hardware devices, such as a printer or a scanner, you need a system call.
• System calls are used to create and manage new processes.
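As a sketch of the read/write case noted in the list above, the following C program on a Unix-like system uses the POSIX system-call wrappers open, write, lseek, read, and close; the file name notes_demo.txt is an invented example:

#include <fcntl.h>      /* open, O_* flags */
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* read, write, lseek, close */

/* Minimal sketch of file I/O through system calls: each of open/write/read/
 * close traps into the kernel, which performs the privileged work and
 * returns the result to this user-mode process. The file name is illustrative. */
int main(void) {
    const char msg[] = "written through a system call\n";
    char buf[64];

    int fd = open("notes_demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");                 /* the kernel refused the request */
        return 1;
    }
    write(fd, msg, strlen(msg));        /* data is copied into kernel space */
    lseek(fd, 0, SEEK_SET);             /* rewind so we can read it back */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read back: %s", buf);
    }
    close(fd);                          /* release the kernel's file object */
    return 0;
}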
The Applications run in an area of memory known as user space. A system call connects to the operating
system's kernel, which executes in kernel space. When an application creates a system call, it must first obtain
permission from the kernel. It achieves this using an interrupt request, which pauses the current process and
transfers control to the kernel. If the request is permitted, the kernel performs the requested action, like creating
or deleting a file. As input, the application receives the kernel's output. The application resumes the procedure
after the input is received. When the operation is finished, the kernel returns the results to the application and
then moves data from kernel space to user space in memory.
A simple system call may take a few nanoseconds to provide the result, like retrieving the system date and
time. A more complicated system call, such as connecting to a network device, may take a few seconds. Most
operating systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating
systems are multi-threaded, which means they can handle various system calls at the same time.
Types of System Calls
Process Control
Process control system calls are used to direct processes. Some process control examples include
create process, load, execute, abort, end, and terminate process.
File Management
File management system calls are used to handle files. Some file management examples include
create file, delete file, open, close, read, and write.
Device Management
Device management system calls are used to deal with devices. Some examples of device management
calls include request device, release device, read, write, and get device attributes.
Information Maintenance
Information maintenance system calls are used to maintain information. Some examples of
information maintenance calls include get system data, set system data, get time or date, and set time or date.
Communication
Communication system calls are used for communication between processes. Some examples of communication
calls include create and delete communication connections, and send and receive messages.
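To make the process-control category concrete, here is a small sketch for a Unix-like system (an assumed environment, not part of these notes) in which the parent creates a child with fork, the child replaces itself with the ls program via execlp, and the parent waits for it to finish:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* waitpid, WEXITSTATUS */
#include <unistd.h>     /* fork, execlp */

/* Minimal process-control sketch: fork creates a new process, execlp loads
 * a new program into the child, and waitpid lets the parent block until the
 * child terminates. */
int main(void) {
    pid_t pid = fork();                 /* process-control system call */

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: replace this process image with the "ls" program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec failed */
        exit(1);
    }

    int status = 0;
    waitpid(pid, &status, 0);           /* parent waits for the child to end */
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}

The choice of ls is purely illustrative; any of the exec family of calls would serve, and the equivalent calls differ on non-Unix systems.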
Difference between User Mode and Kernel Mode:
Definition – User Mode is the restricted mode in which application programs execute and start; Kernel Mode is
the privileged mode which the computer enters when accessing hardware resources.
Modes – User Mode is also considered the slave mode or the restricted mode; Kernel Mode is the system mode,
master mode or the privileged mode.
Address Space – In User Mode, a process gets its own address space; in Kernel Mode, processes share a single
address space.
Interruptions – In User Mode, if an interrupt occurs, only one process fails; in Kernel Mode, if an interrupt
occurs, the whole operating system might fail.
Restrictions – In User Mode, kernel programs cannot be accessed directly; in Kernel Mode, both user programs
and kernel programs can be accessed.
6. What is the real-time operating system? What is the difference between hard real-time and
soft real-time operating systems?
8. What do you understand by system call? Enumerate five system calls used for
process management.
Unit-2:(Concurrent Processes)
Syllabus:
Concurrent Processes: Process Concept, Principle of Concurrency, Producer / Consumer
Problem, Mutual Exclusion, Critical Section Problem, Dekker’s solution, Peterson’s solution,
Semaphores, Test and Set operation; Classical Problem in Concurrency- Dining Philosopher
Problem, Sleeping Barber Problem; Inter Process Communication models and Schemes, Process
generation.
1. Stack – The process stack contains temporary data such as method/function parameters, return addresses,
and local variables.
2. Heap – This is memory that is dynamically allocated to the process during its run time.
3. Text – This includes the compiled program code; the current activity is represented by the value of the
program counter and the contents of the processor's registers.
4. Data – This section contains the global and static variables.
Process Life Cycle
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
In general, a process can have one of the following five states at a time.
1. Start – This is the initial state when a process is first started/created.
2. Ready – The process is waiting to be assigned to a processor. Ready processes are waiting for the processor
to be allocated to them by the operating system so that they can run.
3. Running – Once the process has been assigned to a processor by the OS scheduler, the process state is set to
running and the processor executes its instructions.
4. Waiting – The process moves into the waiting state if it needs to wait for a resource, such as waiting for user
input, or waiting for a file to become available.
5. Terminated or Exit – Once the process finishes its execution, or it is terminated by the operating system, it is
moved to the terminated state, where it waits to be removed from main memory.
1. Process State – The current state of the process, i.e., whether it is ready, running, waiting, or whatever.
2. Process ID – Unique identification for each process in the operating system.
3. Pointer – A pointer to the parent process.
4. Program Counter – A pointer to the address of the next instruction to be executed for this process.
5. CPU Registers – The various CPU registers in which the process's data must be stored when it is in the
running state.
6. Accounting Information – This includes the amount of CPU used for process execution, time limits,
execution ID, etc.
7. I/O Status Information – This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −
Fig:2.3
Operations on Processes
There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination. These are given in
detail as follows −
Process Creation
Processes need to be created in the system for different operations, for example at system initialization, at a
user's request, or when a running process issues a process-creation system call (such as fork()).
Process Termination
A process terminates through the following events −
Whenever the process finishes executing its final statement and asks the operating system to
delete it by using exit() system call.
At that point of time the process may return the status value to its parent process with the help of
wait() system call.
All the resources of the process including physical and virtual memory, open files, I/O buffers
are deallocated by the operating system.
The reasons that a parent process may terminate the execution of one of its children are as follows −
● The child exceeds its usage of resources that it has been allocated.
● The task that is assigned to the child is no longer required.
● The parent is exiting and the operating system does not allow a child to continue if its
parent terminates.
Some systems, including VMS, do not allow a child to exist if its parent has terminated. In such
systems, if a process terminates either normally or abnormally, then all its children have to be
terminated. This concept is referred to as cascading termination.
Inter-process Communication:
Processes executing concurrently in the operating system may be either independent processes or
cooperating processes. A process is independent if it cannot affect or be affected by the other
processes executing in the system. Any process that does not share data with any other process is
independent. A process is cooperating if it can affect or be affected by the other processes
executing in the system. So, any process that shares data with other processes is a cooperating
process.
There are several reasons for providing an environment that allows process cooperation:
∙ Information sharing
∙ Computation speedup
∙ Modularity
∙ Convenience
Cooperating processes require an inter-process communication (IPC) mechanism that will allow
them to exchange data and information. There are two fundamental models of inter-process
communication:
1) Shared memory
2) Message passing
Message-passing systems additionally raise design issues such as naming, synchronization, and buffering.
Shared Memory:
In the shared-memory model, a region of memory that is shared by cooperating processes is
established. A shared-memory region resides in the address space of the process creating the
shared-memory segment. Other processes that wish to communicate using this shared-memory
segment must attach it to their address space. Processes can then exchange information by
reading and writing data to the shared region. The form of the data and the location are determined by these
processes and are not under the operating system's control.
Message passing:
In the message-passing model, communication takes place by means of messages exchanged
between the cooperating processes. Message passing provides a mechanism to allow processes to
communicate and to synchronize their actions without sharing the same address space and is
particularly useful in a distributed environment, where the communicating processes may reside
on different computers connected by a network. For example, a chat program.
Message passing is slower than Shared memory, as message-passing systems are typically
implemented using system calls and thus require the more time-consuming task of kernel
intervention.
Message passing is useful for exchanging smaller amounts of data and is also easier to
implement than shared memory.
The actual function of message passing is normally provided in the form of a pair of primitives: send(message)
and receive(message).
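As a small illustration, two processes on a POSIX system can pass a message through a pipe; here write() and read() play the roles of send() and receive() (the message text and buffer size are illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                                  /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                         /* child: the receiver */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);   /* receive(message) */
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }

    close(fd[0]);                              /* parent: the sender */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));            /* send(message) */
    close(fd[1]);
    wait(NULL);
    return 0;
}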
Lecture:9
Principles of concurrency:
A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes can either directly share a logical address space (that is, both code
and data) or be allowed to share data only through files or messages.
Concurrent access to shared data may result in data inconsistency. To achieve the consistency of
shared data we need to understand the principles of concurrency which are given below:
Race Condition:
∙ A race condition occurs when multiple processes or threads read and write data items so that
the final result depends on the order of execution of instructions in the multiple processes. Let us
consider two simple examples:
− Suppose that two processes, P1 and P2, share the global variable X. At some point in
its execution, P1 updates X to the value 1, and at some point in its execution, P2 updates X to the
value 2. Thus, the two tasks are in a race to write variable X. In this example the “loser” of the
race (the process that updates last) determines the final value of X.
− Consider two process, P3 and P4, that share global variables b and c, with initial values
b = 1 and c = 2. At some point in its execution, P3 executes the assignment b = b + c, and at
some point in its execution, P4 executes the assignment c = b + c. Note that the two processes
update different variables. However, the final values of the two variables depend on the order in
which the two processes execute these two assignments. If P3 executes its assignment statement
first, then the final values are b = 3 and c = 5. If P4 executes its assignment statement first, then
the final values are b = 4 and c = 3.
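The same effect can be reproduced with threads. In the sketch below (illustrative, using POSIX threads), two threads increment a shared counter without synchronization, so the final result depends on how their load-increment-store sequences interleave:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                    /* shared data, no protection */

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                   /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but updates are lost when the threads interleave. */
    printf("counter = %ld\n", counter);
    return 0;
}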
Operating System Concerns:
Design and management issues raised by concurrency include keeping track of the various active processes,
allocating and de-allocating resources, protecting the data and resources of each process against interference by
other processes, and ensuring that the results of a process are independent of its execution speed relative to
other concurrent processes.
Lamport's Bakery Algorithm:
This algorithm is known as the bakery algorithm as this type of scheduling is adopted in bakeries
where token numbers are issued to set the order of customers. When a customer enters a bakery
store, he gets a unique token number on its entry. The global counter displays the number of
customers currently being served, and all other customers must wait at that time. Once the baker
finishes serving the current customer, the next number is displayed. The customer with the next
token is now being served.
Similarly, in Lamport's bakery algorithm, processes are treated as customers. In this, each
process waiting to enter its critical section gets a token number, and the process with the lowest
number enters the critical section. If two processes have the same token number, the process with
a lower process ID enters its critical section.
Explanation –
1. boolean entering[n];
2. int number[n];
All entering variables are initialized to false, and the n integer variables number[0..n-1] are all initialized to
0. The values of these integer variables are used as token numbers.
When a process Pi wishes to enter its critical section, it first sets entering[i] to true to make the other processes
aware that it is choosing a token number. It then chooses a token number greater than those held by the other
processes and writes it into number[i]; after reading the other processes' numbers, it sets entering[i] to false.
It then enters a loop to evaluate the status of the other processes: it waits while some other process Pj is still
choosing its token number, and then Pi waits until all processes with smaller token numbers, or with the same
token number but higher priority (a lower process ID), have been served first.
When the process has finished with its critical section execution, it resets its number variable to
0.
The Bakery algorithm meets all the requirements of the critical section problem.
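A sketch of the bakery algorithm in C (the number of processes N, the lock/unlock names, and the helper max_number() are illustrative; a real implementation would also need memory barriers):

#define N 5                                 /* number of processes (illustrative) */

volatile int entering[N] = {0};
volatile int number[N]   = {0};

static int max_number(void) {
    int m = 0;
    for (int j = 0; j < N; j++)
        if (number[j] > m) m = number[j];
    return m;
}

void lock(int i) {
    entering[i] = 1;                        /* announce that Pi is choosing */
    number[i] = 1 + max_number();           /* take a token larger than all others */
    entering[i] = 0;
    for (int j = 0; j < N; j++) {
        while (entering[j])                 /* wait while Pj is still choosing */
            ;
        /* wait while Pj holds a smaller token, or the same token with a lower ID */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
               (number[j] == number[i] && j < i)))
            ;
    }
}

void unlock(int i) {
    number[i] = 0;                          /* leave the critical section */
}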
Lecture:11
What is Mutual Exclusion?
Mutual exclusion means that if one process is executing in its critical section, then no other process is allowed
to execute in its critical section at the same time. Dekker's algorithm, developed through the series of attempts
below, uses flags to indicate the intention of each process to enter a critical section, and a
turn variable to determine which process is allowed to enter the critical section first.
1st Attempt
A process wishing to execute its critical section first examines the contents of turn (a global
memory location). If the value of turn is equal to the number of the process, then the process may
proceed to its critical section. Otherwise, it is forced to wait. The waiting process repeatedly reads
the value of turn until it is allowed to enter its critical section. This procedure is known as busy
waiting or spin waiting, because the waiting process does nothing productive and consumes
processor time while waiting for its chance.
This solution guarantees the mutual exclusion property but has drawbacks:
− The processes must strictly alternate in the use of their critical sections, so the pace of execution is dictated
by the slower process.
− If one process fails, the other process is permanently blocked. This is true whether a process
fails in its critical section or outside of it.
2nd Attempt
The flaw in the first attempt is that it stores the name of the process that may enter its critical
section and if one process fails, the other process is permanently blocked. To overcome this
problem a Boolean vector flag is defined.
If one process wants to enter its critical section, it first checks the other process's flag repeatedly until that flag
has the value false, indicating that the other process is not in its critical section. The checking process then
immediately sets its own flag to true and enters its critical section; when it leaves the critical section, it sets its
flag to false.
In this solution, if one process fails outside the critical section, including the flag-setting code, then
the other process is not blocked because in this condition flag of the other process is always
false. However, this solution has two drawbacks:
− If one process fails inside its critical section or after setting its flag to true just before entering
its critical section, then the other process is permanently blocked.
− It does not guarantee mutual exclusion.
3rd Attempt
The second attempt failed because a process can change its state after the other process has checked it but
before the other process enters its critical section. Perhaps we can fix this problem with a simple interchange of
two statements: each process first sets its own flag to true and only then checks the other process's flag.
This solution guarantees mutual exclusion. For example, once P0 sets flag[0] to true, P1 cannot enter its critical
section; and if P1 is already in its critical section when P0 sets its flag, then P0 will be blocked by the while
statement.
Problem:
This solution guarantees mutual exclusion but is still flawed. Consider the following sequence of
events: P0 sets flag [0] to true. P1 sets flag [1] to true. P0 checks flag [1]. P1 checks flag [0]. P0
sets flag [0] to false. P1 sets flag [1] to false. P0 sets flag [0] to true. P1 sets flag [1] to true.
This sequence could be extended indefinitely, and neither process could enter its critical section.
A Correct Solution
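A correct software solution combines the per-process flag array with the turn variable. A minimal sketch of Peterson's solution for two processes is given below (variable and function names are illustrative; on modern hardware the shared variables would additionally need atomic operations or memory fences):

#include <stdbool.h>

volatile bool flag[2] = { false, false };
volatile int  turn = 0;

void enter_critical_section(int i) {
    int j = 1 - i;
    flag[i] = true;      /* I want to enter my critical section */
    turn = j;            /* but give the other process the turn first */
    while (flag[j] && turn == j)
        ;                /* busy wait while the other process wants in and it is its turn */
}

void leave_critical_section(int i) {
    flag[i] = false;     /* I am no longer interested */
}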
Semaphores
The wait() and signal() operations on the integer value of a semaphore must be executed
indivisibly. That is, when one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.
This situation is a critical section problem and can be resolved in either of two ways:
1) By using Counting Semaphore
2) By using Binary Semaphore
1) Counting Semaphore
∙ The value of a counting semaphore can range over an unrestricted domain.
∙ It is also known as general semaphore.
Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances. The semaphore is initialized to the number of resources available. Each
process that wishes to use a resource performs a wait() operation on the semaphore (thereby
decrementing the count). When a process releases a resource, it performs a signal() operation
(incrementing the count). When the count for the semaphore goes to 0, all resources are being
used. After that, processes that wish to use a resource will block until the count becomes greater
than 0.
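For example, a POSIX counting semaphore can control access to a pool of identical resource instances; the sketch below assumes a pool of 3 instances and 5 competing threads (all names and counts are illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t pool;                          /* counting semaphore */

void *task(void *arg) {
    sem_wait(&pool);                 /* wait(): decrement; block if the count is 0 */
    printf("thread %ld is using a resource instance\n", (long)arg);
    /* ... use the resource ... */
    sem_post(&pool);                 /* signal(): increment; wake a waiting thread */
    return NULL;
}

int main(void) {
    sem_init(&pool, 0, 3);           /* 3 identical resource instances available */
    pthread_t t[5];
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, task, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}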
2)Binary Semaphore
∙ The value of a binary semaphore can range only between 0 and 1.
∙ Binary semaphores are also known as mutex locks, as they are locks that provide mutual exclusion.
∙ A queue is used to hold the processes waiting on the semaphore.
Mutex
Mutex is a specific kind of binary semaphore that is used to provide a locking mechanism. It
stands for Mutual Exclusion Object. Mutex is mainly used to provide mutual exclusion to a
specific portion of the code so that the process can execute and work with a particular section of
the code at a particular time.
Mutex uses a priority inheritance mechanism to avoid priority inversion issues. The priority
inheritance mechanism keeps higher-priority processes in the blocked state for the minimum
possible time. However, this cannot avoid the priority inversion problem, but it can reduce its
effect up to an extent.
Advantages of Mutex
● No race condition arises, as only one process is in the critical section at a time.
● Data remains consistent and it helps in maintaining integrity.
● It’s a simple locking mechanism that can be obtained by a process before entering into a
critical section and released while leaving the critical section.
Disadvantages of Mutex
● If after entering into the critical section, the thread sleeps or gets preempted by a high-
priority process, no other thread can enter into the critical section. This can lead to starvation.
● When the previous thread leaves the critical section, then only other processes can enter into
it, there is no other mechanism to lock or unlock the critical section.
● Implementation of mutex can lead to busy waiting that leads to the wastage of the CPU
cycle.
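As an illustration, the race-condition example from Lecture 9 can be fixed with a mutex; the sketch below uses a POSIX threads mutex (names are illustrative):

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 2000000 with the mutex */
    return 0;
}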
Lecture:14
Producer-Consumer Problem:
In the producer-consumer problem there is one Producer that produces items and one Consumer that consumes
the items produced by the Producer. Both the producer and the consumer share the same fixed-size memory
buffer.
The task of the Producer is to produce an item, put it into the memory buffer, and start producing items again,
whereas the task of the Consumer is to consume items from the memory buffer.
Below are the constraints that must be satisfied in the Producer-Consumer problem:
o The producer should produce data only when the buffer is not full. In case it is found that
the buffer is full, the producer is not allowed to store any data into the memory buffer.
o Data can only be consumed by the consumer if and only if the memory buffer is not
empty. In case it is found that the buffer is empty, the consumer is not allowed to use any
data from the memory buffer.
o Accessing memory buffer should not be allowed to producer and consumer at the same
time.
This scheme (a circular buffer managed using only the in and out pointers) allows at most BUFFER_SIZE - 1
items in the buffer at the same time.
To overcome the limitation of BUFFER_SIZE – 1, we add an integer variable counter, which is
initialized to 0. Counter is incremented every time we add a new item to the buffer and is
decremented every time we remove one item from the buffer. The code for the producer process
can be modified as follows:
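A minimal sketch of the producer and consumer code using the shared counter is given below (BUFFER_SIZE, the in/out pointers, and the item values are illustrative; the two routines are meant to run as separate concurrent threads or processes, and note that counter++ and counter-- themselves still race, which is what motivates semaphores):

#include <stdio.h>

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;     /* next free slot / next full slot */
int counter = 0;         /* number of items currently in the buffer */

/* Producer: adds items while the buffer is not full. */
void producer(void) {
    int item = 0;
    while (1) {
        item++;                              /* produce an item */
        while (counter == BUFFER_SIZE)
            ;                                /* buffer full: busy wait */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        counter++;                           /* not atomic: needs a semaphore/mutex */
    }
}

/* Consumer: removes items while the buffer is not empty. */
void consumer(void) {
    while (1) {
        while (counter == 0)
            ;                                /* buffer empty: busy wait */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;                           /* not atomic: needs a semaphore/mutex */
        printf("consumed %d\n", item);       /* consume the item */
    }
}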
Dining Philosophers Problem: Although the simple solution of representing each chopstick by a semaphore
guarantees that no two neighbours are eating simultaneously, it could create a deadlock.
8. Give the principles of mutual exclusion in the critical section problem. Also discuss how well
these principles are followed in Dekker’s solution.
LECTURE 18
Process Concept:
What is a Process:
Process States:
Fig:3.1
⮚ Halted : The process has finished and is about to leave the system.
LECTURE 19
• Register values: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
If we have a single processor in our system, there is only one running process at a time.
Other ready processes wait for the processor.
LECTURE 20
Operations on process:
A. Process Creation
● A parent process creates children processes, which, in turn, create other processes,
forming a tree of processes
● Resource sharing
● Parent and children share all resources
● Children share subset of parent’s resources
● Execution
i. Parent and children execute concurrently
B. Parent waits until children terminate
C. Process Termination
● Process executes last statement and asks the operating system to delete it (exit)
i. Output data from child to parent (via wait)
ii. Process’ resources are deallocated by operating system
● Parent may terminate execution of children processes (abort)
i. Child has exceeded allocated resources
ii. Task assigned to child is no longer required
iii. If parent is exiting
Some operating systems do not allow a child to continue if
its parent terminates
● A memory address is a given value within the address space, such as 4021f000. The
process can access a memory address only in a valid memory area. Memory areas
have associated permissions, such as readable, writable, and executable, that
the associated process must respect. If a process accesses a memory address not in
a valid memory area, or if it accesses a valid area in an invalid manner, the kernel
kills the process with the dreaded "Segmentation Fault" message.
LECTURE 22
Schedulers: A process migrates among the various scheduling queues throughout its
lifetime. The operating system must select, for scheduling purposes, processes from these queues in some
fashion; this selection is carried out by the appropriate scheduler (long-term, short-term, or medium-term).
Medium-Term Scheduler:
A. First-Come, First-Served Scheduling (FCFS):
With this scheme, the process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS policy is easily managed with a FIFO queue. When a
process enters the ready queue, its PCB is linked onto the tail of the queue. When the
CPU is free, it is allocated to the process at the head of the queue. The running process
is then removed from the queue.
Example:
Process p1,p2,p3,p4,p5 having arrival time of 0,2,3,5,8 microseconds and processing
time 3,3,1,4,2 microseconds, Draw Gantt Chart & Calculate Average Turn Around
Time, Average Waiting Time, CPU Utilization & Throughput using FCFS.
Processes   Arrival Time   Processing Time   T.A.T. = T(P.C.) - T(P.S.)   W.T. = T.A.T. - T(Proc.)
P1          0              3                 3 - 0 = 3                    3 - 3 = 0
P2          2              3                 6 - 2 = 4                    4 - 3 = 1
P3          3              1                 7 - 3 = 4                    4 - 1 = 3
P4          5              4                 11 - 5 = 6                   6 - 4 = 2
P5          8              2                 13 - 8 = 5                   5 - 2 = 3
GANTT CHART:
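A worked computation derived from the table above (FCFS order P1, P2, P3, P4, P5; the CPU is never idle):
P1 P2 P3 P4 P5
0 3 6 7 11 13
Average T.A.T. = (3 + 4 + 4 + 6 + 5) / 5 = 22 / 5 = 4.4 microseconds
Average W.T. = (0 + 1 + 3 + 2 + 3) / 5 = 9 / 5 = 1.8 microseconds
CPU Utilization = 13 busy units / 13 total units = 100%
Throughput = 5 processes / 13 microseconds ≈ 0.38 process per microsecond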
LECTURE 23
B. Shortest-Job-First Scheduling (SJF):
● Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time
● Two schemes:
i. nonpreemptive – once the CPU is given to the process, it cannot be preempted
until it completes its CPU burst
ii. preemptive – if a new process arrives with CPU burst length less than
remaining time of current executing process, preempt. This scheme is
known as the Shortest-Remaining-Time-First (SRTF)
● SJF is optimal – gives minimum average waiting time for a given set of processes
Example:
Process p1,p2,p3,p4 having burst time of 6,8,7,3 microseconds. Draw Gantt Chart &
Calculate Average Turn Around Time, Average Waiting Time, CPU Utilization &
Throughput using SJF.
Processes   Burst Time   T.A.T. = T(P.C.) - T(P.S.)   W.T. = T.A.T. - T(Proc.)
P4          3            3 - 0 = 3                    3 - 3 = 0
P1          6            9 - 0 = 9                    9 - 6 = 3
P3          7            16 - 0 = 16                  16 - 7 = 9
P2          8            24 - 0 = 24                  24 - 8 = 16
GANTT CHART
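A worked computation derived from the table above (taking all arrival times as 0, as in the table, with SJF order P4, P1, P3, P2):
P4 P1 P3 P2
0 3 9 16 24
Average T.A.T. = (3 + 9 + 16 + 24) / 4 = 52 / 4 = 13 microseconds
Average W.T. = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 microseconds
CPU Utilization = 24 / 24 = 100%
Throughput = 4 processes / 24 microseconds ≈ 0.17 process per microsecond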
Priority Scheduling:
● A priority number (integer) is associated with each process
● The CPU is allocated to the process with the highest priority
(smallest integer highest priority)
● Problem Starvation – low priority processes may never execute
● Solution Aging – as time progresses increase the priority of the process
GANTT CHART:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Round-Robin Scheduling:
● Each process gets a small unit of CPU time (time quantum), usually 10-100
milliseconds. After this time has elapsed, the process is preempted and added to
the end of the ready queue.If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q time units.
● Used for time sharing & multiuser O.S.
● FCFS with preemptive scheduling.
Example:
Process p1,p2,p3 having processing time of 24,3,3 milliseconds.
Draw Gantt Chart & Calculate Average Turn Around Time, Average Waiting
Time, CPU Utilization & Throughput using Round Robin with time slice of
4milliseconds.
Processes   Processing Time   T.A.T. = T(P.C.) - T(P.S.)   W.T. = T.A.T. - T(Proc.)
P1          24                30 - 0 = 30                  30 - 24 = 6
P2          3                 7 - 0 = 7                    7 - 3 = 4
P3          3                 10 - 0 = 10                  10 - 3 = 7
GANTT CHART
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Average T.A.T. = (30 + 7 + 10)/3 = 47/3 = 15.67 millisecond
Average W.T. = (6+4+7)/3 =17/3 = 5.67 millisecond
Problem 1: Process p1,p2,p3 having burst time of 24,3,3 microseconds. Draw Gantt
Chart & Calculate Average Turn Around Time, Average Waiting Time, CPU Utilization
& Throughput using FCFS.
[Ans. Average TAT = 27 microseconds, Average WT = 17 microseconds, CPU Utilization = 100%]
ii) If the scheduler takes 0.2 unit of CPU time in a context switch for a completed job and 0.1
unit of additional CPU time for incomplete jobs for saving their context, calculate the
percentage of CPU time wasted in each case.
Problem 3: Processes A,B,C,D,E having arrival time 0,0,1,2,2 and execution time
10,2,3,1,4 and priority 3,1,3,5,2. Draw the Gantt Chart and find average waiting time
and response time of the process set.
Problem 4: Process p1,p2,p3 having burst time 7,3,9 and priority 1,2,3 and arrival time
0,4,7.
Calculate turn around time and average waiting time using
i) SJF
ii) priority. (both preemptive)
problem 5: Process p1,p2,p3,p4 having arrival time 0,1,2,3 and burst time 8,4,9,5.
Calculate turn around time and waiting time using SJF, FCFS.
LECTURE 26
Deadlock: A set of blocked processes each holding a resource and waiting to acquire a resource
held by another process in the set.
System Model:
A system consists of a finite number of resources to be distributed among a number
of competing processes. The resources are partitioned into several types, each
consisting of some number of identical instances. Resources include memory space, CPU cycles, files, and
I/O devices (such as printers and DVD drives).
LECTURE 27
● Preempted resources are added to the list of resources for which the
process is waiting.
● Process will be restarted only when it can regain its old resources, as well
as the new ones that it is requesting.
iv. Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration.
LECTURE 27
Deadlock Avoidance: Requires that the system has some additional a priori information
available.
● Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
● The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.
● Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
A. Safe State:
● When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state.
● System is in safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such
that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources
plus the resources held by all the Pj, with j < i.
● That is:
i. If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
ii. When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and
terminate.
B. Avoidance Algorithm
A. Single instance of a resource type: Use a resource-allocation graph
B. Multiple instances of a resource type: Use the banker’s algorithm
● Banker’s Algorithm
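A sketch of the safety check at the heart of the Banker's algorithm (the dimensions P and R, the matrix layout, and the function name are illustrative; Need = Maximum − Allocated):

#include <stdbool.h>
#include <string.h>

#define P 5   /* number of processes (illustrative) */
#define R 3   /* number of resource types (illustrative) */

/* Returns true if the state described by available[], alloc[][] and need[][]
   is safe, i.e. some ordering of all processes can run to completion. */
bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = { false };
    memcpy(work, available, sizeof(work));

    for (int count = 0; count < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                     /* Pi can finish with the current work */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];    /* Pi releases its resources */
                finish[i] = true;
                found = true;
                count++;
            }
        }
        if (!found) return false;              /* no process can proceed: unsafe */
    }
    return true;
}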
Deadlock Detection
In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred
• An algorithm to recover from the deadlock.
LECTURE 28
A. Process Termination:
B. Resource Preemption:
1 Explain threads
2 What do you understand by Process? Explain various states of process with suitable diagram. Explain process
control block.
3 What is a deadlock? Discuss the necessary conditions for deadlock with examples
4 Describe Banker’s algorithm for safe allocation.
5 What are the various scheduling criteria for CPU scheduling
6 What is the use of inter process communication and context switching
7 Discuss the usage of wait-for graph method
8
Consider the following snapshot of a system:
            Allocated      Maximum        Available
Process     R1 R2 R3       R1 R2 R3       R1 R2 R3
P1          2  2  3        3  6  8        7  7  10
P2          2  0  3        4  3  3
P3          1  2  4        3  4  4
P2 1 4
P3 2 9
P4 3 5
What is the average waiting and turn around time for these process with:
FCFS Scheduling
Preemptive SJF Scheduling
P2 1 4
P3 2 9
P4 3 5
Draw Gantt chart and find the average waiting time and average turnaround time:
P3 2 5 2
P4 3 8 4
Draw Gantt chart and find the average waiting time and average turnaround time:
(i) SRTF Scheduling
(ii Round robin (time
) quantum:3)
Memory Management: Basic bare machine, Resident monitor, Multiprogramming with fixed
partitions, Multiprogramming with variable partitions, Protection schemes, Paging, Segmentation, Paged
segmentation, Virtual memory concepts, Demand paging, Performance of demand paging, Page
replacement algorithms, Thrashing, Cache memory organization, Locality of reference
LECTURE 29
Address binding of instructions and data to memory addresses can happen at three different
stages.
1. Compile time: Compile time is the time taken to compile the program or source
code. If the memory location is known a priori at compile time, absolute code can be
generated.
2. Load time: This is the time taken to link all related program files and load them into main
memory. The compiler must generate relocatable code if the memory location is not known at
compile time.
3. Execution time: It is the time taken to execute the program in main memory by
processor. Binding delayed until run time if the process can be moved during its
execution from one memory segment to another. Need hardware support for address
maps (e.g., base and limit registers).
Dynamic Loading
● It loads the program and data dynamically into physical memory to obtain better
memory- space utilization.
● With dynamic loading, a routine is not loaded until it is called.
● The advantage of dynamic loading is that an unused routine is never loaded.
● This method is useful when large amounts of code are needed to handle infrequently
occurring cases, such as error routines.
● Dynamic loading does not require special support from the operating system.
Dynamic Linking
● Linking postponed until execution time.
● Small piece of code (stub) used to locate the appropriate memory-resident library routine.
● Stub replaces itself with the address of the routine and executes the routine.
● Operating system support is needed to check whether the routine is in another process's memory address space.
● Dynamic linking is particularly useful for libraries.
LECTURE 30
Overlays
● Keep in memory only those instructions and data that are needed at any given time.
● Needed when process is larger than amount of memory allocated to it.
● Implemented by user, no special support needed from operating system, programming
design of overlay structure is complex.
LECTURE:31
MEMORY ALLOCATION
The main memory must accommodate both the operating system and the various user processes.
We need to allocate different parts of the main memory in the most efficient way possible. The
main memory is usually divided into two partitions: one for the resident operating system, and
one for the user processes. We may place the operating system in either low memory or high
memory. The major factor affecting this decision is the location of the interrupt vector. Since the
interrupt vector is often in low memory, programmers usually place the operating system in low
memory as well.
There are following two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Noncontiguous memory allocation
1. Contiguous Memory Allocation- Here, all the processes are stored in contiguous memory
locations. To load multiple processes into memory, the Operating System must divide
memory into multiple partitions for those processes.
Hardware Support- The relocation-register scheme is used to protect user processes from
each other, and from changes to operating-system code and data. The relocation register
contains the value of the smallest physical address of a partition, and the limit register contains the range
of that partition.
According to the size of partitions, multiple-partition schemes are divided into two types:
i. Multiple Fixed Partitions- The partitions are of fixed size and number. A process is loaded into a
partition of equal or greater size, so this scheme suffers from internal fragmentation.
ii. Multiple Variable Partitions- With this partitioning, the partitions are of
variable length and number. When a process is brought into main memory, it is allocated
exactly as much memory as it requires and no more.
Advantages:
● No internal fragmentation and more efficient use of main memory.
Disadvantages:
● Inefficient use of processor due to the need for compaction to counter external
fragmentation.
iii. Partition Selection policy- When the multiple memory holes (partitions) are
large enough to contain a process, the operating system must use an algorithm to select
in which hole the process will be loaded. The partition selection algorithm are as
follows:
● First-fit: The OS looks at all sections of free memory and allocates the process to the first hole found that
is large enough to hold it.
● Next-fit: The search starts at the last hole allocated, and the process is allocated to the next hole found
that is large enough to hold it.
● Best-fit: The OS searches the entire list of holes to find the smallest hole that is large enough to hold the
process.
● Worst-fit: The OS searches the entire list of holes to find the largest hole that is large enough to hold the
process.
1. External Fragmentation- The total memory space exists to satisfy a request, but it is not
contiguous. This wasted space not allocated to any partition is called external fragmentation.
The external fragmentation can be reduced by compaction. The goal is to shuffle the memory
contents to place all free memory together in one large block. Compaction is possible only if
relocation is dynamic, and is done at execution time.
2. Internal Fragmentation- The allocated memory may be slightly larger than requested
memory. The wasted space within a partition is called internal fragmentation. One method to
reduce internal fragmentation is to use partitions of different size.
LECTURE 32
PAGING
Main memory is divided into a number of equal-size blocks, are called frames. Each process is
divided into a number of equal-size block of the same length as frames, are called Pages. A
process is loaded by loading all of its pages into available frames (may not be contiguous).
Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d), where p
is an index into the page table and d is the displacement within the page.
Example: Consider a page size of 4 bytes and a physical memory of 32 bytes (8 pages), we show
how the user's view of memory can be mapped into physical memory. Logical address 0 is page
0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0
maps to physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical
address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table, page
1 is mapped to frame6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0).
Logical address 13 maps to physical address 9(= (2 x 4)+1).
The percentage of times that a particular page number is found in the TLB is called the
hit ratio. The effective access time (EAT) is obtained as follows:
EAT = (TLB search time + memory access time) x hit ratio + (TLB search time + 2 x memory access time) x (1 - hit ratio)
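For example (illustrative figures): with a TLB search time of 20 ns, a memory access time of 100 ns, and a hit ratio of 80%,
EAT = 0.80 x (20 + 100) + 0.20 x (20 + 100 + 100) = 96 + 44 = 140 ns.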
LECTURE:34
1. Hierarchical Page Tables- In two-level paging, the logical address is divided into an outer page number p1,
an inner page number p2, and a page offset, where p1 is an index into the outer page table and p2 is the
displacement within the page of the outer page table.
2. Hashed Page Tables- This scheme is applicable for address space larger than 32bits.
In this scheme, the virtual page number is hashed into a page table. This page table
contains a chain of elements hashing to the same location. Virtual page numbers are
compared in this chain searching for a match. If a match is found, the corresponding
physical frame is extracted.
Fig:4.9
Shared Pages
Shared code
● One copy of read-only (reentrant) code shared among processes (i.e., text editors,
compilers, window systems).
● Shared code must appear in same location in the logical address space of all processes.
Private code and data
● Each process keeps a separate copy of the code and data.
● The pages for the private code and data can appear anywhere in the logical address
space.
Problem-01:
Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte addressable.
We have-
● Number of locations possible with 22 bits = 2^22 locations
● It is given that the size of one location = 2 bytes
Thus, size of memory = 2^22 x 2 bytes = 2^23 bytes = 8 MB.
Problem-02:
Calculate the number of bits required in the address for memory having size of 16 GB. Assume the
memory is 4-byte addressable.
Let ‘n’ be the number of bits required. Then, size of memory = 2^n x 4 bytes. Since the given memory has a size of 16 GB,
we have-
2^n x 4 bytes = 16 GB
2^n x 4 = 16 G
2^n x 2^2 = 2^34
2^n = 2^32
∴ n = 32 bits
Problem-03:
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is 4 KB, what is the
approximate size of the page table?
Given-
● Size of main memory = 64 MB
● Number of bits in virtual address space = 32 bits
● Page size = 4 KB
We will consider that the memory is byte addressable.
= 64 MB
= 2^26 B
Thus, number of bits in physical address = 26 bits
Number of Frames in Main Memory-
Number of frames in main memory
= Size of main memory / Frame size
= 64 MB / 4 KB
= 2^26 B / 2^12 B
= 2^14
Thus, number of bits in frame number = 14 bits
We have,
Page size
= 4 KB
= 2^12 B
Thus, number of bits in page offset = 12 bits
So, the physical address is 26 bits.
Process Size-
Virtual address space = 2^32 B, so the number of pages in the process = 2^32 B / 4 KB = 2^20 pages.
Assuming each page-table entry stores the 14-bit frame number (rounded up to 2 bytes), the page table size
≈ 2^20 x 2 B = 2 MB (approximately).
LECTURE 35
SEGMENTATION
Segmentation is a memory-management scheme that supports user view of memory. A program is
a collection of segments. A segment is a logical unit such as: main program, procedure, function,
method, object, local variables, global variables, common block, stack, symbol table, arrays etc.
A logical-address space is a collection of segments. Each segment has a name and a length. The
user specifies each address by two quantities: a segment name/number and an offset.
Hence, Logical address consists of a two tuple: <segment-number, offset> Segment table maps
two-dimensional physical addresses and each entry in table has: base – contains the starting
physical address where the segments reside in memory. Limit specifies the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates number of segments used by a program.
The segment number is used as an index into the segment table. The offset d of the
logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical
addressing attempt beyond the end of the segment).
Fig:4.11 (Segmentation)
LECTURE 36
VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes that may not be
completely in memory. Only part of the program needs to be in memory for execution. It
means that Logical address space can be much larger than physical address space.
Virtual memory allows processes to easily share files and address spaces, and it provides
an efficient mechanism for process creation.
Virtual memory is the separation of user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided for programmers
when only a smaller physical memory is available. Virtual memory makes the task of
programming much easier, because the programmer no longer needs to worry about the
amount of physical memory available.
● Demand paging
● Demand segmentation
SOLUTION:
Segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped
onto location 4300 + 53 = 4353. A reference to segment 3, byte 852 is mapped to 3200 (the base of segment
3) + 852 = 4052.
A reference to byte 1222 of segment 0 would result in a trap to the OS, as this segment is only 1000 bytes long.
Example of Segmentation
Fig:4.1(segmentation)
DEMAND PAGING
A lazy swapper never swaps a page into memory unless that page will be needed. A swapper
manipulates entire processes, whereas a pager is concerned with the individual pages of a
process.
When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only
those necessary pages into memory. Thus, it avoids reading into memory pages that will not be
used anyway, decreasing the swap time and the amount of physical memory needed.
Page Table-
● The valid-invalid bit scheme of Page table can be used for indicating which pages are
currently in memory.
● When this bit is set to "valid", this value indicates that the associated page is both legal and
in memory. If the bit is set to "invalid", this value indicates that the page either is not valid or
is valid but is currently on the disk.
Fig :4.14 (Page table when some pages are not in main memory)
When a process references a page that is marked invalid (i.e., not in main memory), a page fault occurs. The
procedure for handling a page fault is as follows:
1. We check an internal table for this process, to determine whether the reference
was a valid or invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but we have not yet brought that
page into memory, we continue with the following steps to page it in.
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The
process can now access the page as though it had always been in memory.
Note: The pages are copied into memory, only when they are required. This mechanism is
called Pure Demand Paging.
Let p be the probability of a page fault (0< p < 1). Then the effective access time is
Effective access time = (1 - p) x memory access time + p x page fault time
In any case, we are faced with three major components of the page-fault service time:
1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.
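For example (illustrative figures): with a memory access time of 200 ns and an average page-fault service time of 8 ms, a page-fault probability of p = 0.001 gives
EAT = 0.999 x 200 + 0.001 x 8,000,000 ≈ 8,200 ns ≈ 8.2 microseconds,
so even one fault in a thousand accesses slows memory access down by a factor of about 40.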
LECTURE 38
PAGE REPLACEMENT
The page replacement is a mechanism that loads a page from disc to memory when a page
of memory needs to be allocated. Page replacement can be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
The page replacement algorithms decide which memory pages to page out (swap out,
write to disk) when a page of memory needs to be allocated. We evaluate an algorithm by
running it on a particular string of memory references and computing the number of page
faults. The string of memory references is called a reference string. The different page
replacement algorithms are described as follows:
First In First Out (FIFO)- This is the simplest page replacement algorithm: the operating system keeps all pages
in memory in a queue, with the oldest page at the front, and the oldest page is selected for replacement.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number
of page faults.
● Initially, all slots are empty, so when 1, 3, 0 came they are allocated to
the empty slots —>3 Page Faults.
● When 3 comes, it is already in memory so —>0 Page Faults.
● Then 5 comes, it is not available in memory so it replaces the oldest
page slot i.e 1.—>1 Page Fault.
● 6 comes, it is also not available in memory so it replaces the oldest page slot i.e. 3 —
>1 Page Fault.
● Finally, when 3 comes, it is not available in memory, so it replaces 0 —> 1 Page Fault. Total = 6 page faults.
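A small simulation of FIFO replacement for the reference string in Example-1 (the frame count is fixed at 3; purely illustrative):

#include <stdio.h>

int main(void) {
    int ref[]  = {1, 3, 0, 3, 5, 6, 3};     /* reference string from Example-1 */
    int n      = sizeof(ref) / sizeof(ref[0]);
    int frames[3] = {-1, -1, -1};           /* 3 page frames, initially empty */
    int next = 0, faults = 0;               /* next indexes the oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = ref[i];          /* replace the oldest page (FIFO) */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);   /* prints 6 for this string */
    return 0;
}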
LECTURE-39
Optimal Page Replacement- In this algorithm, the page that would not be used for the longest duration
of time in the future is replaced.
Example-2: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, with 4 page
frame. Find number of page fault.
Initially, all slots are empty, so when 7 0 1 2 are allocated to the empty slots —>4 Page faults
0 is already there so —>0 Page fault.
when 3 came it will take the place of 7 because it is not used for the longest duration of time
in the future.—>1 Page fault.
0 is already there so —>0
Page fault.. 4 will takes place
of 1 —>1 Page Fault.
Now for the further page reference string —>0 Page fault because they are already
available in the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system
cannot know future requests. The use of Optimal Page replacement is to set up a
benchmark so that other replacement algorithms can be analyzed against it.
LECTURE-40
LRU-Approximation Page Replacement:
i. Additional-Reference-Bits Algorithm- We can keep an 8-bit byte for each page in a table in memory. At regular intervals, a
timer interrupt transfers control to the operating system. The operating system shifts the
reference bit for each page into the high order bit of its 8-bit, shifting the other bits right
over 1 bit position, discarding the low- order bit. These 8 bits shift registers contain the
history of page use for the last eight time periods.
If we interpret these 8-bits as unsigned integers, the page with the lowest number is the
LRU page, and it can be replaced.
ii.Second-Chance Algorithm-
We could keep a counter of the number of references that have been made to each page,
and develop the following two schemes.
i. LFU page-replacement algorithm: The least frequently used (LFU) page-replacement
algorithm requires that the page with the smallest count be replaced. The reason for this
selection is that an actively used page should have a large reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement algorithm
is based on the argument that the page with the largest count should be replaced.
LECTURE-41
ALLOCATION OF FRAMES
When a page fault occurs, a free frame is needed to hold the desired page. While the page swap is taking place,
a replacement frame can be selected in advance and written to the disk as the user process continues to execute.
The operating system also allocates all of its buffer and table space from the free-frame list.
Two major allocation Algorithm/schemes.
1. Equal allocation
2. Proportional allocation
1. Equal allocation: The easiest way to split m frames among n processes is to give
everyone an equal share, m/n frames. This scheme is called equal allocation.
2. Proportional allocation: Available memory is allocated to each process according to its size. Let si be the
size of the virtual memory for process pi, and define
S = Σ si
Then, if the total number of available frames is m, we allocate ai frames to process pi, where ai
is approximately
ai = (si / S) x m.
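For example (illustrative figures): with m = 62 free frames, a process of size s1 = 10 pages and a process of size s2 = 127 pages, S = 137, so
a1 = (10 / 137) x 62 ≈ 4 frames and a2 = (127 / 137) x 62 ≈ 57 frames.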
THRASHING
The system spends most of its time shuttling pages between main memory and secondary memory
due to frequent page faults. This behavior is known as thrashing.
A process is thrashing if it is spending more time paging than executing. This leads to: low CPU
utilization and the operating system thinks that it needs to increase the degree of multiprogramming.
In other words, thrashing occurs when page faults and swapping happen so frequently that the operating system
spends more time swapping pages than executing processes, and CPU utilization becomes very low or
negligible.
Fig:4.20 (Thrashing)
LECTURE 42
Cache Memory
Cache Memory is a special very high-speed memory. The cache is a smaller and faster
memory that stores copies of the data from frequently used main memory locations. There are
various different independent caches in a CPU, which store instructions and data. The most
important use of cache memory is that it is used to reduce the average time to access data from
the main memory.
Characteristics of Cache Memory
Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU. Cache memory holds frequently requested data and instructions so that they are immediately available to
the CPU when needed. The memory hierarchy has the following levels:
Level 1 or Registers- This is the memory that holds the data the CPU is currently working on. The most
commonly used registers are the accumulator, program counter, address register, etc.
Level 2 or Cache memory- This is a very fast memory with a short access time, in which data is
temporarily stored for faster access.
Level 3 or Main Memory- It is the memory on which the computer works currently. It is small
in size and once power is off data no longer stays in this memory.
Level 4 or Secondary Memory- It is external memory that is not as fast as the main memory
but data stays permanently in this memory.
Cache Performance
When the processor needs to read or write a location in the main memory, it first checks for a
corresponding entry in the cache. If the processor finds that the memory location is in the cache,
a Cache Hit has occurred and data is read from the cache. If the processor does not find the
memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a
new entry and copies in data from the main memory, and then the request is fulfilled from the
contents of the cache. The performance of cache memory is frequently measured in terms of a
quantity called Hit ratio.
Cache Mapping
There are three different types of mapping used for the purpose of cache memory, which is as
follows:
Direct Mapping
Associative Mapping
Set-Associative Mapping
1. Direct Mapping
Fig:4.22
Locality of Reference-
In the case of loops in a program, the central processing unit repeatedly refers to the set of instructions
that constitute the loop.
In the case of subroutine calls, the same set of instructions is fetched from memory every time the subroutine is
called.
References to data items also get localized, which means the same data item is referenced again and
again.
Cache Operation-
IMPORTANT QUESTIONS
Q.1 What are the memory management requirements?
Q.2 Explain static partitioned allocation with partition sizes 300,150, 100, 200, 20. Assuming first fit
method indicate the memory status after memory request for sizes 80, 180, 280, 380, 30.
Q.7 What is demand paging? Explain it with address translation mechanism used.
Q.9 How many page faults would occur for the following replacement algorithm, assuming four and six
frames respectively?
LECTURE 43
What is the need for I/O Management?
I/O Devices
One of the important jobs of an Operating System is to manage various I/O devices including
mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-mapped screen,
LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers, etc. An
I/O system is required to take an application I/O request and send it to the physical device, then
take whatever response comes back from the device and send it to the application. I/O devices
can be divided into two categories
● Block devices − A block device is one with which the driver communicates by sending entire
blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
● Character devices − A character device is one with which the driver communicates by
sending and receiving single characters (bytes, octets). For example, serial ports, parallel
ports, sound cards, etc.
Device Controllers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.
The Device Controller works like an interface between a device and a device driver. I/O units
(Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic
component where electronic component is called the device controller.
There is always a device controller and a device driver for each device to communicate with the
Operating Systems. A device controller may be able to handle multiple devices. As an interface
its main task is to convert serial bit stream to block of bytes, perform error correction as
necessary.
Any device connected to the computer is connected by a plug and socket, and the socket is
connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.
A computer must have a way of detecting the arrival of any type of input. There are two ways
that this can happen, known as polling and interrupts. Both of these techniques allow the
processor to deal with events that can happen at any time and that are not related to the process it
is currently running.
LECTURE 44
I/O Subsystems
Kernel I/O Subsystem in Operating System
The kernel provides many services related to I/O. Several services such as scheduling, caching,
spooling, device reservation, and error handling – are provided by the kernel’s I/O subsystem
built on the hardware and device-driver infrastructure. The I/O subsystem is also responsible for
protecting itself from errant processes and malicious users.
1. I/O Scheduling –
To schedule a set of I/O requests means to determine a good order in which to execute
them. The order in which applications issue their system calls is rarely the best choice.
Scheduling can improve the overall performance of the system, share device access
fairly among processes, and reduce the average waiting time, response time,
and turnaround time for I/O to complete. OS developers implement scheduling by
maintaining a wait queue of the request for each device. When an application issues a
blocking I/O system call, The request is placed in the queue for that device. The I/O
scheduler rearranges the order to improve the efficiency of the system.
2. Buffering –
3. Caching –
A cache is a region of fast memory that holds a copy of data. Access to the cached copy
is more efficient than access to the original. For instance, the instructions of the currently running
process are stored on disk, cached in physical memory, and copied again into the CPU's
secondary and primary caches.
The main difference between a buffer and a cache is that a buffer may hold the only
existing copy of a data item, while a cache, by definition, holds a copy on faster storage
of an item that resides elsewhere.
4. Spooling and Device Reservation –
A spool is a buffer that holds the output for a device, such as a printer, that cannot accept
interleaved data streams. Although a printer can serve only one job at a time, several
applications may wish to print their output concurrently, without having their output
mixed together.
The OS solves this problem by intercepting all output to the printer. The
output of each application is spooled to a separate disk file. When an application finishes
printing, the spooling system queues the corresponding spool file for output to the
printer.
5. Error Handling –
An OS that uses protected memory can guard against many kinds of hardware and
application errors, so that a complete system failure is not the usual result of each minor
mechanical glitch. Devices and I/O transfers can fail in many ways, either for transient
reasons, as when a network becomes overloaded, or for permanent reasons, as when a disk
controller becomes defective.
Error Handling Strategies: Ensuring robust error handling is a critical aspect of the
Kernel I/O Subsystem to maintain the stability and reliability of the operating system.
The strategies employed for error handling involve mechanisms for detecting, reporting,
and recovering from I/O errors. Below are key components of error handling strategies
within the Kernel I/O Subsystem:
User Alerts: Providing alerts to users, either through the user interface or system
notifications, can prompt immediate attention to potential issues.
Automated Notifications: Implementing automated notification systems, such as emails
or messages, to inform system administrators about critical errors for proactive system
management.
6. I/O Protection –
Errors and the issue of protection are closely related. A user process may attempt to issue illegal
I/O instructions to disrupt the normal function of a system. We can use the various mechanisms
to ensure that such disruption cannot take place in the system.
A buffer is a memory area that stores data being transferred between two devices or between a
device and an application.
Uses of I/O Buffering:
● Buffering is done to deal effectively with a speed mismatch between the producer and
consumer of the data stream.
● A buffer is created in main memory to accumulate the bytes received from the modem.
● After receiving the data in the buffer, the data gets transferred to the disk from the buffer in a
single operation.
● This process of data transfer is not instantaneous, therefore the modem needs another buffer
in order to store additional incoming data.
● When the first buffer gets filled, then it is requested to transfer the data to disk.
● The modem then starts filling the additional incoming data in the second buffer while the
data in the first buffer gets transferred to disk.
● When both the buffers complete their tasks, then the modem switches back to the first buffer
while the data from the second buffer gets transferred to the disk.
● The use of two buffers decouples the producer and the consumer of the data, thus
relaxing the timing requirements between them.
● Buffering also provides variations for devices that have different data transfer sizes.
Types of various I/O buffering techniques :
1. Single buffer: A buffer is provided by the Operating system to the system portion of the main
memory.
Block oriented device –
● System buffer takes the input.
● After taking the input, the block gets transferred to the user space by the process and then
the process requests for another block.
● Two blocks works simultaneously, when one block of data is processed by the user
process, the next block is being read in.
● OS can swap the processes.
● OS can record the data of system buffer to user processes.
2. Double buffer: Two system buffers are used instead of one; a process can transfer data to (or from) one
buffer while the operating system empties (or fills) the other.
Stream oriented –
● Line- at a time I/O, the user process need not be suspended for input or output, unless the
process runs ahead of the double buffer.
● Byte- at a time operations, a double buffer offers no advantage over a single buffer of
twice the length.
3. Circular buffer: When more than two buffers are used, the collection of buffers is treated as a
circular buffer, with the producer filling slots and the consumer draining them in a ring.
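The sketch below is a minimal fixed-size circular (ring) buffer in Python; the class name, capacity, and the use of exceptions to signal "full" and "empty" are illustrative choices, not part of any kernel interface.

```python
# A minimal fixed-size circular buffer (ring buffer) sketch.
class CircularBuffer:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.capacity = capacity
        self.head = 0      # next slot to read (consumer)
        self.tail = 0      # next slot to write (producer)
        self.count = 0     # number of items currently stored

    def put(self, item):
        if self.count == self.capacity:
            raise BufferError("buffer full: producer must wait")
        self.data[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty: consumer must wait")
        item = self.data[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return item

buf = CircularBuffer(3)
buf.put("a"); buf.put("b")
print(buf.get(), buf.get())   # a b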
LECTURE 46
Disk Storage and Disk Scheduling
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O Scheduling.
Importance of Disk Scheduling in Operating System
● Multiple I/O requests may arrive from different processes, but only one I/O request can be
served at a time by the disk controller. The other I/O requests therefore need to wait in the
waiting queue and must be scheduled.
● Two or more requests may be far from each other so this can result in greater disk arm
movement.
● Hard drives are one of the slowest parts of the computer system and thus need to be accessed
in an efficient manner.
Disk Scheduling Algorithms
● FCFS (First Come First Serve)
● SSTF (Shortest Seek Time First)
● SCAN (Elevator Algorithm)
● C-SCAN (Circular SCAN)
● LOOK
Important terms related to disk scheduling:
● Seek Time: Seek time is the time taken to position the disk arm over the track where the
data is to be read or written. A disk scheduling algorithm that gives a lower average seek
time is better.
● Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to
rotate into a position where the read/write head can access it. A disk scheduling algorithm
that gives lower rotational latency is better.
● Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating
speed of the disk and the number of bytes to be transferred.
● Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer Time.
● Disk Response Time: Response time is the time a request spends waiting to perform its I/O
operation. The average response time is the mean of the response times of all requests, and
the variance of response time measures how individual requests are serviced relative to that
average. A disk scheduling algorithm that gives a lower variance of response time is better.
FCFS (First Come First Serve)
FCFS is the simplest of all disk scheduling algorithms. In FCFS, the requests are served in the
order in which they arrive in the disk queue.
Advantages of FCFS
Here are some of the advantages of First Come First Serve.
● Every request gets a fair chance
● No indefinite postponement
Disadvantages of FCFS
Here are some of the disadvantages of First Come First Serve.
● Does not try to optimize seek time
● May not provide the best possible service
SCAN Algorithm
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm is at
50, and the disk arm should move "towards the larger value" on a disk with tracks numbered
0–199.
Therefore, the total overhead movement (total distance covered by the disk arm) is calculated as
= (199-50) + (199-16) = 332
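The short Python sketch below reproduces this arithmetic for SCAN and, for comparison, computes the FCFS movement for the same request queue. The 200-track disk size (tracks 0–199) is an assumption taken from the worked example.

```python
def fcfs_movement(requests, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def scan_movement(requests, head, max_track=199):
    """SCAN: sweep toward larger tracks to the disk end, then reverse."""
    higher = [t for t in requests if t >= head]
    lower = [t for t in requests if t < head]
    total = 0
    if higher:
        total += max_track - head        # sweep up to the last track (199)
        head = max_track
    if lower:
        total += head - min(lower)       # sweep back down to the lowest request
    return total

requests = [82, 170, 43, 140, 24, 16, 190]
print(scan_movement(requests, 50))   # 332, matching the worked example above
print(fcfs_movement(requests, 50))   # 642 for the same queue under FCFS
```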
LECTURE 48
C-SCAN
In the SCAN algorithm, the disk arm rescans the path it has already scanned after reversing its
direction, so it may happen that many requests are waiting at the other end while few or none
are pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, jumps to the other end of the disk and starts servicing the requests from
there. The disk arm therefore moves in a circular fashion, which is why the algorithm is called
Circular SCAN.
Example: (figure illustrating Circular SCAN head movement)
LOOK Algorithm
LOOK is similar to SCAN, except that the disk arm goes only as far as the last pending request
in each direction and then reverses, instead of travelling all the way to the end of the disk.
C-LOOK
As LOOK is similar to the SCAN algorithm, C-LOOK is similarly related to the C-SCAN disk
scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end of the disk, goes
only up to the last request to be serviced in front of the head and then jumps to the last request
at the other end. Thus, it also avoids the extra delay caused by unnecessary traversal to the end
of the disk.
Example:
1. Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and the disk arm should move "towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341
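A small Python sketch of the C-LOOK calculation above follows. It assumes the head is moving toward larger track numbers and, as in the example, that there is at least one pending request on each side of the head.

```python
def clook_movement(requests, head):
    """C-LOOK: sweep up to the largest pending request, then jump to the
    smallest pending request and continue serving upward.
    Assumes requests exist on both sides of the head, as in the example."""
    higher = sorted(t for t in requests if t >= head)
    lower = sorted(t for t in requests if t < head)
    total = higher[-1] - head            # up to the largest request (190)
    total += higher[-1] - lower[0]       # jump back to the smallest request (16)
    total += lower[-1] - lower[0]        # serve the remaining requests upward
    return total

print(clook_movement([82, 170, 43, 140, 24, 16, 190], 50))   # 341
```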
LECTURE 49
RAID
RAID (Redundant Arrays of Independent Disks)
RAID is a technique that makes use of a combination of multiple disks instead of using a single
disk for increased performance, data redundancy, or both. The term was coined by David
Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.
1. RAID-0 (Striping)
2. RAID-1 (Mirroring)
1. RAID-0 (Striping)
● Blocks are "striped" across the disks, i.e., consecutive logical blocks are placed on
successive disks in turn (see the mapping sketch after the advantages and disadvantages
below).
RAID-0
Evaluation
● Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
● Capacity: N*B
The entire space is being used to store data. Since there is no duplication, N disks each
having B blocks are fully utilized.
Advantages
1. It is easy to implement.
2. It utilizes the storage capacity in a better way.
Disadvantages
1. A single drive loss can result in the complete failure of the system.
2. Not a good choice for a critical system.
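The sketch below shows how RAID-0 striping maps logical block numbers onto disks. The array size of 4 disks and the round-robin block mapping are assumptions for illustration; real controllers may stripe in larger chunks.

```python
# RAID-0 block-mapping sketch: logical block i is "striped" across N disks.
N_DISKS = 4  # assumed array size for the example

def raid0_locate(logical_block, n_disks=N_DISKS):
    disk = logical_block % n_disks       # which disk holds the block
    offset = logical_block // n_disks    # block position on that disk
    return disk, offset

for block in range(8):
    disk, offset = raid0_locate(block)
    print(f"logical block {block} -> disk {disk}, offset {offset}")
```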
2. RAID-1 (Mirroring)
● More than one copy of each block is stored in a separate disk. Thus, every block has two (or
more) copies, lying on different disks.
Raid-2
● It uses bit-level striping with Hamming-code error correction.
Advantages
1. In case of error correction, it uses Hamming code.
2. It uses a designated drive to store parity.
Disadvantages
1. It has a complex structure and high cost due to the extra drive.
2. It requires an extra drive for error detection.
Raid-3
● Here Disk 3 contains the parity bits for Disk 0, Disk 1, and Disk 2. If data loss occurs on one
of those disks, it can be reconstructed using the remaining disks and Disk 3.
Advantages
1. Data can be transferred in bulk.
2. Data can be accessed in parallel.
Disadvantages
1. It requires an additional drive for parity.
2. In the case of small-size files, it performs slowly.
Raid-4
● Assume that in the above figure, C3 is lost due to a disk failure. We can then recompute the
data stored in C3 by XOR-ing the values in all the other columns with the parity bit, which
allows us to recover the lost data (see the parity sketch after the Evaluation below).
Evaluation
● Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the way parity works). If more
than one disk fails, there is no way to recover the data.
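The following Python sketch demonstrates the XOR-parity idea used by RAID-4/RAID-5: the parity block is the byte-wise XOR of the data blocks, and XOR-ing the surviving blocks with the parity block rebuilds a lost block. The block contents are made-up example values.

```python
from functools import reduce

def parity(blocks):
    """Parity block = byte-wise XOR of all the given blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x0f\x10", b"\xf0\x01", b"\x33\x44"]   # data blocks on disks 0..2
p = parity(data)                                  # stored on the parity disk

# Suppose disk 1 fails: its block is rebuilt from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
print("recovered block:", rebuilt)
```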
Raid-5
● It is similar to RAID-4, except that the parity blocks are distributed across all the disks
instead of being kept on a single dedicated parity drive.
Advantages
1. Data can be reconstructed using parity bits.
2. It makes the performance better.
Raid-6
● It is similar to RAID-5, but stores two independent parity blocks per stripe, so it can tolerate
the failure of two disks.
Advantages
1. Very high data Accessibility.
2. Fast read data transactions.
Disadvantages
1. Due to double parity, it has slow write data transactions.
2. Extra space is required.
Advantages of RAID
● Data redundancy: By keeping numerous copies of the data on many disks, RAID can shield
data from disk failures.
● Performance enhancement: RAID can enhance performance by distributing data over
several drives, enabling the simultaneous execution of several read/write operations.
● Scalability: RAID is scalable, therefore by adding more disks to the array, the storage
capacity may be expanded.
● Versatility: RAID is applicable to a wide range of devices, such as workstations, servers,
and personal computers.
Disadvantages of RAID
● Cost: RAID implementation can be costly, particularly for arrays with large capacities.
● Complexity: The setup and management of RAID might be challenging.
● Decreased performance: The parity calculations necessary for some RAID configurations,
including RAID 5 and RAID 6, may result in a decrease in speed.
● Single point of failure: Although RAID offers data redundancy, it is not a comprehensive
backup solution. If the RAID controller malfunctions, the entire contents of the array could
be lost.
LECTURE 50
File System in Operating System
A file system is a collection of files and directories used by an operating system to organize the
storage of files and to provide a pathway for users to access those files. A file system is a
software layer that manages files and folders on an electronic storage device, such as a hard disk
or flash memory.
A computer file is a named unit used for saving and managing data in the computer system. The
data stored in the computer system is entirely in digital format, and there are various types of
files that help us store different kinds of data.
What is a File System?
A file system is a method an operating system uses to store, organize, and manage files and
directories on a storage device. Some common types of file systems include:
1. FAT (File Allocation Table): An older file system used by older versions of Windows and
other operating systems.
2. NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-based
operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.
File Operations
6. Delete operation:
Deleting a file not only removes all the data stored inside it but also frees the disk space it
occupied. To delete the specified file, the directory is searched; when the directory entry is
located, all the associated file space and the directory entry are released. (A short sketch using
Python's standard library follows this list of operations.)
7. Truncate operation:
Truncating deletes the contents of a file without deleting the file or its attributes. The file itself
is not removed; only the information stored inside it is discarded.
8. Close operation:
When the processing of the file is complete, it should be closed so that all changes made to it
become permanent and all resources occupied by it are released. On closing, the OS deallocates
all the internal descriptors that were created when the file was opened.
9. Append operation:
This operation adds data to the end of the file.
10. Rename operation:
This operation is used to rename the existing file.
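The sketch below exercises the file operations listed above using Python's standard library. The file names are placeholders; the close operation happens implicitly when each `with` block ends.

```python
import os

# Create/write, append, rename, truncate, and delete a file (names are placeholders).
with open("report.txt", "w") as f:        # create / open for writing
    f.write("first line\n")

with open("report.txt", "a") as f:        # append operation
    f.write("appended line\n")

os.rename("report.txt", "report_old.txt") # rename operation

os.truncate("report_old.txt", 0)          # truncate: keep the file, discard its contents

os.remove("report_old.txt")               # delete: directory entry and space released
```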
FILE ORGANIZATION AND ACCESS MECHANISM
File organization | Order of records | Records can be deleted or replaced? | Access mode
Sequential | Order in which they were written | A record cannot be deleted, but its space can be reused for a same-length record | Sequential only
Line-sequential | Order in which they were written | No | Sequential only
Indexed | Collating sequence by key field | Yes | Sequential, random, or dynamic
Relative | Order of relative record numbers | Yes | Sequential, random, or dynamic
File organization and access mode
Fig:5.6
1. Sequential Access
Most operating systems access files sequentially; in other words, most files need to be accessed
sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained which initially
points to the base address of the file. When the user wants to read the first word of the file, the
pointer provides that word to the user and advances by one word. This process continues till the
end of the file.
Modern systems do provide the concepts of direct access and indexed access, but sequential
access remains the most used method, because most files, such as text files, audio files, and
video files, need to be accessed sequentially.
2. Direct Access
Direct access is mostly required in the case of database systems. In most cases, we need filtered
information from the database, and sequential access can be very slow and inefficient in such
cases.
Suppose every block of the storage stores 4 records and we know that the record we need is
stored in the 10th block. Sequential access would be inefficient here because it would traverse
all the preceding blocks; direct access lets us jump straight to the 10th block (a sketch follows
the figure below).
Fig:5.7
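The following sketch shows direct access in Python by seeking straight to a block. The record size, records per block, and the file name "records.dat" are assumptions made for the example.

```python
RECORD_SIZE = 64          # assumed fixed record length in bytes
RECORDS_PER_BLOCK = 4     # matches the example above
BLOCK_SIZE = RECORD_SIZE * RECORDS_PER_BLOCK

def read_block(path, block_number):
    """Direct access: jump straight to the requested block instead of
    scanning the file sequentially."""
    with open(path, "rb") as f:
        f.seek(block_number * BLOCK_SIZE)   # position the file pointer
        return f.read(BLOCK_SIZE)

# block = read_block("records.dat", 10)    # fetch only the 10th block
```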
3. Indexed Access
If a file can be sorted on any of its fields, then an index can be assigned to a group of records,
and a particular record can be accessed directly via its index. The index is essentially the
address of a record in the file.
With indexed access, searching in a large database becomes very quick and easy, but some
extra space in memory is needed to store the index (see the sketch below).
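A minimal sketch of indexed access follows: an in-memory index maps a key to the record's byte offset in the data file, so a lookup needs only one seek. The index layout, the key_of callback, and the line-per-record format are illustrative assumptions.

```python
# Indexed access sketch: key -> byte offset of the record in the data file.
index = {}

def build_index(path, key_of):
    """Scan the file once and remember where each record starts."""
    offset = 0
    with open(path, "rb") as f:
        for line in f:                  # assumes one record per line
            index[key_of(line)] = offset
            offset += len(line)

def fetch(path, key):
    """Jump directly to the indexed record instead of scanning the file."""
    with open(path, "rb") as f:
        f.seek(index[key])
        return f.readline()
```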
File Directories
Directory Structure in OS (Operating System)
What is a directory?
A directory can be defined as a listing of the related files on the disk. The directory may store
some or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be
divided into a number of partitions of different sizes. The partitions are also called volumes or
minidisks.
Each partition must have at least one directory in which, all the files of the partition can be listed.
A directory entry is maintained for each file in the directory, which stores all the information
related to that file. The following operations can be performed on a directory:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
LECTURE 51
Single Level Directory
The simplest method is to have one big list of all the files on the disk. The entire system
contains only one directory, which lists all the files present in the file system. The directory
contains one entry for each file on the file system.
Two Level Directory
In the two-level directory structure, each user has a separate user file directory (UFD), and all
UFDs are indexed in a master file directory. Every operating system maintains a variable such
as PWD which contains the present directory name (here, the present user name) so that
searching can be done appropriately.
Advantages:
● The main advantage is that there can be more than two files with the same name (in
different user directories), which is very helpful when there are multiple users.
● Security is improved, because one user is prevented from accessing another user's files.
● Searching for files becomes very easy in this directory structure.
Disadvantages:
● Along with the advantage of security comes the disadvantage that a user cannot share a file
with other users.
● Although users can create their own files, they do not have the ability to create
subdirectories.
● Scalability is limited because one user cannot group files of the same type together.
Tree Structured Directory
Advantages:
● This directory structure allows subdirectories inside a directory.
● The searching is easier.
● Sorting files into important and unimportant ones becomes easier.
● This directory is more scalable than the other two directory structures explained.
Fig:5.11
Acyclic Graph Directory
In this structure, directories are allowed to share subdirectories and files, so the same file or
subdirectory may appear in two different directories.
Disadvantages:
● Because of the complex structure it has, it is difficult to implement this directory structure.
● The user must be very cautious when editing or even deleting a file, as the file may be
accessed by multiple users.
● If we need to delete the file, then we need to delete all the references of the file in order to
delete it permanently.
LECTURE 52
File Sharing in OS
File sharing in an operating system (OS) refers to how information and files are shared between
different users, computers, or devices on a network; files here are units of data stored on a
computer in the form of documents, images, videos, or any other type of information.
For example, file sharing lets your computer talk to another computer and exchange pictures,
documents, or any other useful data. This is generally useful when one wants to work on a
project with others, send files to friends, or simply move files to another device. The OS
provides ways to do this, such as email attachments and cloud services, to make the sharing
process easier and more secure. In effect, file sharing acts as a bridge between Computer A and
Computer B that allows them to swap files with each other.
● Folder/Directory: A folder is basically a container for files on a computer. A folder can
contain files and even other folders, maintaining a hierarchical structure for organizing
data.
● Networking: It is involved in connecting computers or devices where we need to share the
resources. Networks can be local (LAN) or global (Internet).
● IP Address: A numerical label assigned to every device connected to the network.
● Protocol: A set of rules that governs communication between devices on a network. In the
context of file sharing, protocols define how files are transferred between computers. For
example, the File Transfer Protocol (FTP) is a standard network protocol used to transfer
files between a client and a server on a computer network (see the sketch after this list).
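A small sketch of FTP-based file sharing using Python's standard ftplib module follows. The host name, credentials, and file names are hypothetical placeholders; a real server and account would differ.

```python
from ftplib import FTP

# Hypothetical host, credentials, and file names for illustration only.
with FTP("ftp.example.com") as ftp:
    ftp.login(user="student", passwd="secret")
    with open("notes.pdf", "rb") as f:
        ftp.storbinary("STOR notes.pdf", f)         # upload: send file to the server
    with open("copy.pdf", "wb") as f:
        ftp.retrbinary("RETR notes.pdf", f.write)   # download: fetch the file back
```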
All these file-sharing methods serve different purposes and needs, depending on the
requirements of the users and the flexibility offered by the operating system.
A file is a collection of related information. The file system resides on secondary storage and
provides efficient and convenient access to the disk by allowing data to be stored, located, and
retrieved.
File system implementation in an operating system refers to how the file system manages the
storage and retrieval of data on a physical storage device such as a hard drive, solid-state drive,
or flash drive. The file system implementation includes several components, including:
1. File System Structure: The file system structure refers to how files and directories are
organized and stored on the physical storage device. This includes the layout of file system
data structures such as the directory structure, the file allocation table, and inodes.
2. File Allocation: The file allocation mechanism determines how files are allocated on the
storage device. This can include allocation techniques such as contiguous allocation, linked
allocation, indexed allocation, or a combination of these techniques.
Fig:5.17
1. I/O Control level – Device drivers act as an interface between devices and the OS; they help
to transfer data between the disk and main memory. A driver takes a block number as input
and produces the corresponding low-level, hardware-specific instructions as output.
2. Basic file system – It issues general commands to the device driver to read and write physical
blocks on the disk. It also manages memory buffers and caches: a buffer can hold the
contents of a disk block, and the cache stores frequently used file system metadata.
Accessing many files at the same time can lower performance. A file system is implemented
using two kinds of data structures: on-disk structures (items 1–4 below) and in-memory
structures (items 5–8); directories themselves can be implemented with either a linear list or a
hash table (items 9–10).
1. Boot Control Block – It is usually the first block of volume and it contains information
needed to boot an operating system. In UNIX it is called the boot block and in NTFS it is
called the partition boot sector.
2. Volume Control Block – It holds information about a particular partition, e.g., the free-block
count, block size, and block pointers. In UNIX it is called the superblock, and in NTFS it is
stored in the master file table.
3. Directory Structure – It stores file names and the associated inode numbers. In UNIX it
includes file names and associated inode numbers, and in NTFS it is stored in the master file
table.
4. Per-File FCB – It contains details about files and it has a unique identifier number to allow
association with the directory entry. In NTFS it is stored in the master file table.
5. Mount Table – It contains information about each mounted volume.
6. Directory-Structure cache – This cache holds the directory information of recently accessed
directories.
7. System-wide open-file table – It contains the copy of the FCB of each open file.
8. Per-process open-file table – It contains information about the files opened by that particular
process and maps each entry to the appropriate entry in the system-wide open-file table.
9. Linear List – A directory can be implemented as a linear list of file names with pointers to
the data blocks, but this is time-consuming. To create a new file, we must first search the
directory to be sure that no existing file has the same name, and then add the new entry at the
end of the directory. To delete a file, we search the directory for the named file and release its
space. To reuse the directory entry, we can either mark the entry as unused or attach it to a
list of free directory entries.
10. Hash Table – The hash table takes a value computed from the file name and returns a pointer
to the file entry, which decreases directory search time; insertion and deletion of files are also
easy. The major difficulties are that a hash table generally has a fixed size and that the hash
function depends on that size (a lookup sketch follows below).
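The sketch below illustrates a hash-table directory using a Python dictionary mapping file names to inode numbers; the inode values and file names are made-up examples, not a real on-disk format.

```python
# Directory implemented as a hash table: file name -> inode number.
# Lookups, insertions, and deletions are O(1) on average, unlike a linear
# list, which must scan every entry.
directory = {}

def create(name, inode):
    if name in directory:
        raise FileExistsError(name)   # no two entries may share a name
    directory[name] = inode

def delete(name):
    del directory[name]               # entry released for reuse

create("notes.txt", 1021)
print(directory.get("notes.txt"))     # 1021 -- found without scanning a list
```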
Implementation Issues
Management of disk space: To prevent wasted space and, where possible, to allow files to be
stored in contiguous blocks, file systems must manage disk space effectively. Free-space
management, fragmentation prevention, and garbage collection are methods for managing disk
space.
File protection in an operating system is the process of securing files from unauthorized access,
alteration, or deletion. It is critical for data security and ensures that sensitive information
remains confidential and secure. Operating systems provide various mechanisms and techniques
such as file permissions, encryption, access control lists, auditing, and physical file security to
protect files. Proper file protection involves user authentication, authorization, access control,
encryption, and auditing. Ongoing updates and patches are also necessary to prevent security
breaches. File protection in an operating system is essential to maintain data security and
minimize the risk of data breaches and other security incidents.
File protection in an operating system refers to the various mechanisms and techniques used to
secure files from unauthorized access, alteration, or deletion. It involves controlling access to
files, ensuring their security and confidentiality, and preventing data breaches and other security
incidents. Operating systems provide several file protection features, including file permissions,
encryption, access control lists, auditing, and physical file security. These measures allow
administrators to manage access to files, determine who can access them, what actions can be
performed on them, and how they are stored and backed up. Proper file protection requires
ongoing updates and patches to fix vulnerabilities and prevent security breaches. It is crucial for
data security in the digital age where cyber threats are prevalent. By implementing file protection
measures, organizations can safeguard their files, maintain data confidentiality, and minimize the
risk of data breaches and other security incidents.
File protection is an essential component of modern operating systems, ensuring that files are
secured from unauthorized access, alteration, or deletion. In this context, there are several types
of file protection mechanisms used in operating systems to provide robust data security.
● File Permissions − File permissions are a basic form of file protection that controls access
to files by setting permissions for users and groups. File permissions allow the system
administrator to assign specific access rights to users and groups, which can include read,
write, and execute privileges. These access rights can be assigned at the file or directory
level, allowing users and groups to access specific files or directories as needed. File
permissions can be modified by the system administrator at any time to adjust access
privileges, which helps to prevent unauthorized access.
● Encryption − Encryption is the process of converting plain text into ciphertext to protect
files from unauthorized access. Encrypted files can only be accessed by authorized users
who have the correct encryption key to decrypt them. Encryption is widely used to secure
sensitive files both at rest and in transit (short sketches of file permissions and file
encryption follow this list).
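The first sketch below shows owner-only file permissions set from Python; the file name "payroll.csv" is a placeholder and the example assumes a Unix-style permission model.

```python
import os
import stat

# Restrict a file so that only its owner may read and write it; group and
# others get no access. 0o600 is the equivalent octal permission mask.
os.chmod("payroll.csv", stat.S_IRUSR | stat.S_IWUSR)   # same as os.chmod(path, 0o600)

mode = os.stat("payroll.csv").st_mode
print(stat.filemode(mode))    # e.g. '-rw-------'
```

The second sketch shows symmetric-key encryption of a file's contents using the third-party cryptography package (Fernet); the file name is again a placeholder, and key management is deliberately left out of the example.

```python
from cryptography.fernet import Fernet   # third-party package: pip install cryptography

# Symmetric-key encryption of a file's contents (file name is a placeholder).
key = Fernet.generate_key()              # must be stored securely by the authorized user
cipher = Fernet(key)

with open("secret.txt", "rb") as f:
    ciphertext = cipher.encrypt(f.read())   # unreadable without the key

plaintext = cipher.decrypt(ciphertext)      # only holders of `key` can do this
```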
Overall, these types of file protection mechanisms are essential for ensuring data security and
minimizing the risk of data breaches and other security incidents in an operating system. The
choice of file protection mechanisms will depend on the specific requirements of the
organization, as well as the sensitivity and volume of the data being protected. However, a
combination of these file protection mechanisms can provide comprehensive protection against
various types of threats and vulnerabilities.
File protection is an important aspect of modern operating systems that ensures data security and
integrity by preventing unauthorized access, alteration, or deletion of files. There are several
advantages of file protection mechanisms in an operating system, including −
● Data Security − File protection mechanisms such as encryption, access control lists, and
file permissions provide robust data security by preventing unauthorized access to files.
These mechanisms ensure that only authorized users can access files, which helps to
prevent data breaches and other security incidents. Data security is critical for
organizations that handle sensitive data such as personal data, financial information, and
intellectual property.
● Compliance − File protection mechanisms are essential for compliance with regulatory
requirements such as GDPR, HIPAA, and PCI-DSS. These regulations require
organizations to control access to sensitive data and to protect it from unauthorized
disclosure.
There are also some potential disadvantages of file protection in an operating system, including −
● Overhead − Some file protection mechanisms, such as encryption, access control lists, and
auditing, can add overhead to system performance. This can consume system resources and
slow down file access and processing times.
● Complexity − File protection mechanisms can be complex and require specialized
knowledge to implement and manage. This can lead to errors and misconfigurations that
compromise data security.
● Compatibility Issues − Some file protection mechanisms may not be compatible with all
types of files or applications, leading to compatibility issues and limitations in file usage.
● Cost − Implementing robust file protection mechanisms can be expensive, especially for
small organizations with limited budgets. This can make it difficult to achieve full data
protection.
There are various head-to-head comparisons between security and protection in an operating
system. Some of them are as follows:
● Protection deals with controlling access to programs, processes, and data by users and
processes within the system, whereas security deals with threats that originate from outside
the system.
● Protection provides the mechanism (such as access rights and permissions) for enforcing
access policies, whereas security also covers authentication and defense against malware
and other external attacks.
● Protection is handled internally by the operating system, whereas security also involves
external factors such as users, networks, and the physical environment.
Question Bank
1. Explain the term RAID and its characteristics. Also, explain various RAID levels with
their advantages and disadvantages
2. Explain the concept of file system management. Also, explain various file allocation and
file access mechanisms in details.
3. Suppose the following disk request sequence (track numbers) for a disk with 100 tracks
is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W
head is on track 49. Calculate the net head movement using:
(i) SSTF
(ii) SCAN
(iii) CSCAN
(iv) LOOK
4. Explain the followings: (i) Buffering
(ii) Polling
(iii) Direct Memory Access (DMA)
5. Explain the tree-level directory structure. Explain various operations associated with a file.
6. Difference between Directory and File. Explain File organization and Access mechanism.
7. What do you mean by caching, spooling and error handling? Explain in detail. Explain
FCFS, SCAN & C-SCAN scheduling with examples.
8. Discuss the linked, contiguous, indexed, and multilevel indexing file allocation schemes.
Which allocation scheme will minimize the amount of space required in the directory
structure, and why? Write short notes on:
i) I/O Buffering
ii) Disk storage and scheduling
9. Define seek time and latency time.
10. Define SCAN and C-SCAN scheduling algorithms. A hard disk having 2000 cylinders,
numbered from 0 to 1999. The drive is currently serving the request at cylinder 143, and
the previous request was at cylinder 125. The status of the queue is as follows
86,1470,913,1774,948,1509,1022,1750,130. What is the total distance (in cylinders) that
the disk arm moves to satisfy the entire pending request queue for each of the following disk-
scheduling algorithms?
(i) SSTF
(ii) FCFS
11. What are files and explain the access methods for files.
12. File system protection and security, and
(i) Linked File allocation methods