OS Notes

Unit 1

Overview of Operating System

Operating System

An operating system acts as an interface between application software and the computer
hardware. The operating system is designed in such a way that it can manage the overall
resources and operations of the computer.

Functions of the Operating System


 Resource Management: The operating system manages and allocates memory, CPU time,
and other hardware resources among the various programs and processes running on the
computer.
 Process Management: The operating system is responsible for starting, stopping, and
managing processes and programs. It also controls the scheduling of processes and
allocates resources to them.
 Memory Management: The operating system manages the computer’s primary memory
and provides mechanisms for optimizing memory usage.
 Security: The operating system provides a secure environment for the user, applications,
and data by implementing security policies and mechanisms such as access controls and
encryption.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 File Management: The operating system is responsible for organizing and managing the
file system, including the creation, deletion, and manipulation of files and directories.

 Device Management: The operating system manages input/output devices such as printers,
keyboards, mice, and displays. It provides the necessary drivers and interfaces to enable
communication between the devices and the computer.
 Networking: The operating system provides networking capabilities such as establishing
and managing network connections, handling network protocols, and sharing resources
such as printers and files over a network.
 User Interface: The operating system provides a user interface that enables users to
interact with the computer system. This can be a Graphical User Interface (GUI), a
Command-Line Interface (CLI), or a combination of both.
 Backup and Recovery: The operating system provides mechanisms for backing up data
and recovering it in case of system failures, errors, or disasters.
 Performance Monitoring: The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing resource
usage, and analyzing system logs and metrics.
 Time-Sharing: The operating system enables multiple users to share a computer system
and its resources simultaneously by providing time-sharing mechanisms that allocate
resources fairly and efficiently.
 System Calls: The operating system provides a set of system calls that enable applications
to interact with the operating system and access its resources. System calls provide a
standardized interface between applications and the operating system, enabling portability
and compatibility across different hardware and software platforms.
 Error Detection: The OS provides methods for detecting errors, including the production of
dumps, traces, error messages, and other debugging aids.

Types of Operating Systems


 Batch Operating System: A Batch Operating System is a type of operating system that
does not interact with the computer directly. There is an operator who takes similar jobs
having the same requirements and groups them into batches.
 Time-sharing Operating System: Time-sharing Operating System is a type of operating
system that allows many users to share computer resources (maximum utilization of the
resources).
 Distributed Operating System: Distributed Operating System is a type of operating
system that manages a group of different computers and makes them appear to be a single
computer. These operating systems are designed to operate on a network of computers.
They allow multiple users to access shared resources and communicate with each other
over the network. Examples include Microsoft Windows Server and various distributions
of Linux designed for servers.
 Network Operating System: Network Operating System is a type of operating system that
runs on a server and provides the capability to manage data, users, groups, security,
applications, and other networking functions.
 Real-time Operating System: Real-time Operating System is a type of operating system
that serves a real-time system and the time interval required to process and respond to
inputs is very small. These operating systems are designed to respond to events in real
time. They are used in applications that require quick and deterministic responses, such as
embedded systems, industrial control systems, and robotics.
 Multiprocessing Operating System: A multiprocessing operating system uses multiple
CPUs within a single computer system to boost performance. The CPUs are linked
together so that a job can be divided among them and executed more quickly.

Difference between Linux and Windows

1. Linux is an open-source operating system, whereas Windows is not open source.

2. Linux is free of cost, whereas Windows is paid.

3. Linux file names are case-sensitive, whereas Windows file names are case-insensitive.

4. Linux uses a monolithic kernel, whereas Windows uses a hybrid kernel.

5. Linux is more efficient in comparison with Windows.

6. Linux uses the forward slash (/) to separate directories, whereas Windows uses the
backslash (\).

7. Linux provides more security than Windows.

8. Linux is widely used in security- and penetration-testing-oriented systems, whereas
Windows is less suited for such work.

9. Linux has 3 types of user accounts: (1) Regular, (2) Root, (3) Service. Windows has
4 types of user accounts: (1) Administrator, (2) Standard, (3) Child, (4) Guest.

10. In Linux, the root user is the superuser and has all administrative privileges. In
Windows, the Administrator user has all administrative privileges of the computer.

CUI vs GUI

Interaction: In a CUI, the user interacts with the computer by typing text commands. In a
GUI, the user interacts with the system using graphics such as icons, images, and menus.

Navigation: Navigation in a CUI is not easy. Navigation in a GUI is easy.

Usage: A CUI is harder to use and requires expertise, since commands must be remembered.
A GUI is easy to use.

Speed: A CUI has high speed. A GUI has a comparatively low speed.

Memory requirement: A CUI has a low memory requirement. A GUI has a high memory
requirement.

Peripherals used: In a CUI, users interact with the computer system by typing commands
on the keyboard. In a GUI, users interact with the computer using a graphical interface,
which includes menus and mouse clicks.

Precision: A CUI has high precision. A GUI has low precision.

Flexibility: A CUI has a less flexible user interface. A GUI has a highly flexible user
interface.

Customization: A CUI is not easily customizable. A GUI is highly customizable.

Unit 2

Services and Components of OS

Operating System Services

An operating system is software that acts as an intermediary between the user and the computer
hardware. It is the program with the help of which we are able to run various applications, and it
is the one program that is running all the time. Every computer must have an operating system to
smoothly execute other programs.

Program Execution
It is the Operating System that manages how a program is going to be executed. It loads the
program into the memory after which it is executed. The order in which they are executed
depends on the CPU scheduling algorithm in use; a few are FCFS, SJF, etc. While programs are
in execution, the Operating System also handles deadlocks, i.e. situations in which processes
block one another indefinitely while competing for resources. The Operating System is
responsible for the smooth execution of both user and system programs, and it utilizes the
various resources available for the efficient running of all types of functionalities.
Input Output Operations
Operating System manages the input-output operations and establishes communication
between the user and device drivers. Device drivers are software associated with particular
hardware and managed by the OS, keeping the devices properly in sync. The OS also provides
a program with access to input-output devices when needed.
Communication between Processes
The Operating system manages the communication between processes. Communication
between processes includes data transfer among them. If the processes are not on the same
computer but connected through a computer network, then also their communication is
managed by the Operating System itself.
File Management
The operating system helps in managing files also. If a program needs access to a file, it is the
operating system that grants access. These permissions include read-only, read-write, etc. It
also provides a platform for the user to create, and delete files. The Operating System is
responsible for making decisions regarding the storage of all types of data or files, i.e, floppy
disk/hard disk/pen drive, etc. The Operating System decides how the data should be
manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with a
limited number of players. The team manager (the OS) decides whether an upcoming player will
be in the playing 11, the playing 15, or not included in the team, based on his performance. In the
same way, the OS first checks whether an upcoming program fulfils all the requirements to get
memory space; if all is good, it checks how much memory space will be sufficient for the
program and then loads the program into memory at a certain location. It thus prevents
programs from using unnecessary memory.

Process Management
Let’s understand process management in a unique way. Imagine our kitchen stove as the CPU,
where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen
stove (CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes
(programs), so he ensures that no particular dish (program) takes an unnecessarily long time
and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) basically
schedules time for all dishes (programs) to run the kitchen (the whole system) smoothly, and
thus cooks (executes) all the different dishes (programs) efficiently.
Security and Privacy
 Security: The OS keeps the computer safe from unauthorized users by adding a security
layer to it. Security is essentially a layer of protection that shields the computer from
threats such as viruses and hackers. The OS provides defenses like firewalls and anti-virus
software and ensures the safety of the computer and personal information.
 Privacy: The OS gives us the facility to keep essential information hidden, like having a
lock on a door through which only you may enter. It respects our secrets and provides the
facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that
manages resource sharing. It also manages the CPU time among processes using CPU
Scheduling Algorithms. It also helps in the memory management of the system. It also controls
input-output devices. The OS also ensures the proper use of all the resources available by
deciding which resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the
operating system through either a command-line interface (CLI) or a graphical user interface
(GUI). The command interpreter executes the next user-specified command, while a GUI offers
the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.
Error Handling
The Operating System also handles errors occurring in the CPU, in input-output devices, etc.
It ensures that errors do not occur frequently and fixes the errors that do occur. It also
prevents processes from coming to a deadlock, and it looks out for any type of error or bug
that can occur while any task is being carried out. A well-secured OS also acts as a
countermeasure for preventing breaches of the computer system from external sources and
handles them.

System Call

In computing, a system call is a programmatic way in which a computer program requests a


service from the kernel of the operating system it is executed on. A system call is a way for
programs to interact with the operating system. A computer program makes a system call
when it makes a request to the operating system’s kernel.

System call working:

When a program executes a system call, it transitions from user mode to kernel mode, which is
a higher privileged mode. The transition is typically initiated by invoking a specific function or
trap (interrupt) instruction provided by the programming language or the operating system.

Once in kernel mode, the system call is handled by the operating system. The kernel performs
the requested operation on behalf of the program and returns the result. Afterward, control is
returned to the user-level program, which continues its execution.
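
To make the mode switch concrete, here is a minimal C sketch using standard POSIX wrappers
(write(), getpid(), open()); each call below traps into the kernel and returns to user mode
with a result. The file name demo.txt is purely illustrative.

/* A minimal sketch: each of these library wrappers traps into the kernel,
 * switching the CPU from user mode to kernel mode and back. */
#include <stdio.h>
#include <unistd.h>     /* write(), getpid(), close() */
#include <fcntl.h>      /* open() */

int main(void)
{
    /* write() is a thin wrapper around the write system call */
    const char msg[] = "hello from user mode\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* getpid() asks the kernel for this process's ID */
    printf("my pid is %d\n", (int)getpid());

    /* open() asks the kernel to create/open a file on our behalf */
    int fd = open("demo.txt", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) {
        write(fd, msg, sizeof msg - 1);
        close(fd);  /* close() releases the kernel's file descriptor */
    }
    return 0;
}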

Examples of a System Call in Windows and Unix


System calls for Windows and Unix come in many different forms. These are listed in the table
below as follows:
Process                  Windows                              Unix

Process Control          CreateProcess()                      fork()
                         ExitProcess()                        exit()
                         WaitForSingleObject()                wait()

File Manipulation        CreateFile()                         open()
                         ReadFile()                           read()
                         WriteFile()                          write()
                                                              close()

Device Management        SetConsoleMode()                     ioctl()
                         ReadConsole()                        read()
                         WriteConsole()                       write()

Information Maintenance  GetCurrentProcessID()                getpid()
                         SetTimer()                           alarm()
                         Sleep()                              sleep()

Communication            CreatePipe()                         pipe()
                         CreateFileMapping()                  shmget()
                         MapViewOfFile()                      mmap()

Protection               SetFileSecurity()                    chmod()
                         InitializeSecurityDescriptor()       umask()
                         SetSecurityDescriptorGroup()         chown()
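
As an illustration of the process-control calls in the table, the following C sketch creates a
child with fork(), terminates it with exit(), and collects it in the parent with wait():

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* duplicate the calling process */
    if (pid == 0) {
        printf("child: pid %d\n", (int)getpid());
        exit(42);                /* child terminates with status 42 */
    }
    int status;
    wait(&status);               /* parent blocks until the child exits */
    printf("parent: child returned %d\n", WEXITSTATUS(status));
    return 0;
}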

System Calls Advantages
 Access to hardware resources: System calls allow programs to access hardware resources
such as disk drives, printers, and network devices.
 Memory management: System calls provide a way for programs to allocate and deallocate
memory, as well as access memory-mapped hardware devices.
 Process management: System calls allow programs to create and terminate processes, as
well as manage inter-process communication.
 Security: System calls provide a way for programs to access privileged resources, such as
the ability to modify system settings or perform operations that require administrative
permissions.
 Standardization: System calls provide a standardized interface for programs to interact
with the operating system, ensuring consistency and compatibility across different
hardware platforms and operating system versions.

Components of the Operating System

Process Management :

A process is a program in execution. It consists of the following:


 Executable program
 Program’s data
 Stack and stack pointer
 Program counter and other CPU registers
 Details of opened files
A process can be suspended temporarily and the execution of another process taken up.
A suspended process can be restarted later. Before suspending a process, its details are saved in
a table called the process table so that it can be executed later on. An operating system
supports two system calls to manage processes, create and kill:
 The create system call is used to create a new process.
 The kill system call is used to delete an existing process.
A process can create a number of child processes. Processes can communicate among
themselves either using shared memory or by message-passing techniques. Two processes
running on two different computers can communicate by sending messages over a network.

Files Management :

Files are used for long-term storage. Files are used for both input and output. Every operating
system provides a file management service. This file management service can also be treated as
an abstraction as it hides the information about the disks from the user. The operating system
also provides a system call for file management. The system call for file management includes –
 File creation
 File deletion
 Read and Write operations

Files are stored in a directory. System calls are provided to put a file in a directory or to remove
a file from a directory. Files in the system are protected to maintain the privacy of the user.

I/O Device Management :

The I/O device management component is an I/O manager that hides the details of the hardware
devices and manages main memory for devices using caching and spooling.
This component provides a buffer cache and general device driver code that allows the system
to manage the main memory and the hardware devices connected to it. It also provides and
manages custom drivers for particular hardware devices.
The purpose of the I/O system is to hide the details of hardware devices from the application
programmer.
An I/O device management component allows highly efficient resource utilization while
minimizing errors and making programming easy on the entire range of devices available in
their systems.

Secondary Storage Management :

Broadly, the secondary storage area is any space, where data is stored permanently and the user
can retrieve it easily.
Your computer’s hard drive is the primary location for your files and programs. Other spaces,
such as CD-ROM/DVD drives, flash memory cards, and networked devices, also provide
secondary storage for data on the computer.
The computer’s main memory (RAM) is a volatile storage device in which all programs reside,
it provides only temporary storage space for performing tasks. Secondary storage refers to the
media devices other than RAM (e.g. CDs, DVDs, or hard disks) that provide additional space
for permanent storing of data and software programs which is also called non-volatile storage.

Main memory management :

Main memory is a flexible and volatile type of storage device. It is a large sequence of bytes
and addresses used to store volatile data.
Main memory is also called Random Access Memory (RAM), which is the fastest computer
storage available on PCs.
It is costlier and smaller in capacity than secondary storage devices. Whenever a computer
program is executed, it is temporarily stored in the main memory for execution. Later, the user
can permanently store the data or program in a secondary storage device.

Unit 3

Process management

Process state diagram

1. New

A program that is about to be picked up by the OS and brought into main memory is called a new
process.

2. Ready

Whenever a process is created, it directly enters the ready state, in which it waits for the CPU
to be assigned. The OS picks new processes from secondary memory and puts all of them in the
main memory.

The processes which are ready for the execution and reside in the main memory are called ready
state processes. There can be many processes present in the ready state.

3. Running

One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running
processes for a particular time will always be one. If we have n processors in the system then we
can have n processes running simultaneously.

4. Block or wait

From the Running state, a process can make the transition to the block or wait state depending
upon the scheduling algorithm or the intrinsic behavior of the process.

When a process waits for a certain resource to be assigned or for input from the user, the
OS moves the process to the block or wait state and assigns the CPU to other processes.

5. Completion or termination

When a process finishes its execution, it comes to the termination state. All the context of the
process (its Process Control Block) is deleted and the process is terminated by the
Operating system.

Process Control Block (PCB)

Process State

This specifies the process state i.e. new, ready, running, waiting or terminated.

Process Number

This shows the number of the particular process.

Program Counter

This contains the address of the next instruction that needs to be executed in the process.

Registers

This specifies the registers that are used by the process. They may include accumulators, index
registers, stack pointers, general purpose registers etc.

List of Open Files

These are the different files that are associated with the process

CPU Scheduling Information

The process priority, pointers to scheduling queues etc. is the CPU scheduling information that
is contained in the PCB. This may also include any other scheduling parameters.

Memory Management Information

The memory management information includes the page tables or the segment tables depending
on the memory system used. It also contains the value of the base registers, limit registers etc.

I/O Status Information

This information includes the list of I/O devices used by the process, the list of files etc.

Accounting information

The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of
the PCB accounting information.
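
Putting the fields above together, a simplified, hypothetical PCB might be declared in C as
follows. Real kernels (for example, Linux with its task_struct) hold far more state; the sizes
and field names here are illustrative only.

/* A simplified, hypothetical PCB layout mirroring the fields above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;              /* process number            */
    enum proc_state  state;            /* current process state     */
    unsigned long    program_counter;  /* next instruction address  */
    unsigned long    registers[16];    /* saved CPU registers       */
    int              priority;         /* CPU scheduling info       */
    unsigned long    base, limit;      /* memory-management info    */
    int              open_files[16];   /* list of open files        */
    unsigned long    cpu_time_used;    /* accounting information    */
};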

Location of the Process Control Block

The process control block is kept in a memory area that is protected from the normal user access.
This is done because it contains important process information. Some of the operating systems
place the PCB at the beginning of the kernel stack for the process as it is a safe location.

Process Scheduling in OS

Operating system uses various schedulers for the process scheduling.

1. Long term scheduler

Long term scheduler is also known as job scheduler. It chooses the processes from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.

Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long
term scheduler is to choose a perfect mix of IO bound and CPU bound processes among the jobs
present in the pool.

If the job scheduler chooses more IO bound processes then all of the jobs may reside in the
blocked state all the time and the CPU will remain idle most of the time. This will reduce the
degree of Multiprogramming. Therefore, the Job of long term scheduler is very critical and may
affect the system for a very long time.

2. Short term scheduler

Short term scheduler is also known as CPU scheduler. It selects one of the jobs from the ready
queue and dispatches it to the CPU for execution.

A scheduling algorithm is used to select which job is going to be dispatched for the execution.
The Job of the short term scheduler can be very critical in the sense that if it selects job whose
CPU burst time is very high then all the jobs after that, will have to wait in the ready queue for a
very long time.

This problem is called starvation which may arise if the short term scheduler makes some
mistakes while selecting the job.

3. Medium term scheduler

Medium term scheduler takes care of the swapped-out processes. If a running process needs
some I/O time for its completion, its state must be changed from running to waiting.

Medium term scheduler is used for this purpose. It removes the process from the running state to
make room for the other processes. Such processes are the swapped out processes and this
procedure is called swapping. The medium term scheduler is responsible for suspending and
resuming the processes.

It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix of
processes in the ready queue.

Process Queues

The Operating system manages various types of queues for each of the process states. The PCB
related to the process is also stored in the queue of the same state. If the Process is moved from
one state to another state then its PCB is also unlinked from the corresponding queue and added
to the other state queue in which the transition is made.

There are the following queues maintained by the Operating system.

1. Job Queue

Initially, all processes are stored in the job queue. It is maintained in secondary memory. The
long term scheduler (job scheduler) picks some of the jobs and puts them in the primary
memory.

2. Ready Queue

Ready queue is maintained in primary memory. The short term scheduler picks a job from the
ready queue and dispatches it to the CPU for execution.

3. Waiting Queue

When the process needs some IO operation in order to complete its execution, OS changes the
state of the process from running to waiting. The context (PCB) associated with the process gets
stored on the waiting queue which will be used by the Processor when the process finishes the
IO.

Inter Process Communication (IPC)

A process can be of two types:


 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-operating
process can be affected by other executing processes.

There are many situations when co-operative nature can be utilized for increasing
computational speed, convenience, and modularity. Inter-process communication (IPC) is a
mechanism that allows processes to communicate with each other and synchronize their
actions.

Processes can communicate with each other through both:

1. Shared Memory
2. Message passing

i) Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces some items and the
Consumer consumes that item.
The two processes share a common space or memory location known as a buffer where the
item produced by the Producer is stored and from which the Consumer consumes the item if
needed.

There are two versions of this problem: the first one is known as the unbounded buffer problem
in which the Producer can keep on producing items and there is no limit on the size of the
buffer, the second one is known as the bounded buffer problem in which the Producer can
produce up to a certain number of items before it starts waiting for the Consumer to consume them.
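
A minimal C sketch of the shared-memory method, using the System V calls shmget()/shmat()
listed in the system-call table earlier: the parent produces one value into a shared buffer and
the child consumes it. A real bounded-buffer solution would also need proper synchronization
(e.g. semaphores) instead of the crude sleep() used here.

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
    int *buf = shmat(shmid, NULL, 0);   /* map the segment into our space */

    *buf = 0;
    if (fork() == 0) {                  /* consumer process */
        sleep(1);                       /* crude stand-in for real sync */
        printf("consumed: %d\n", *buf);
        shmdt(buf);
        exit(0);
    }
    *buf = 99;                          /* producer writes an item */
    wait(NULL);
    shmdt(buf);
    shmctl(shmid, IPC_RMID, NULL);      /* remove the segment */
    return 0;
}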
ii) Message Passing Method
Now, we will start our discussion of communication between processes via message
passing. In this method, processes communicate with each other without using any kind of
shared memory. If two processes p1 and p2 want to communicate with each other, they
proceed as follows:

 Establish a communication link (if a link already exists, no need to establish it again.)
 Start exchanging messages using basic primitives.

We need at least two primitives:


– send(message, destination) or send(message)
– receive(message, host) or receive(message)
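
A sketch of these primitives in C, using a POSIX pipe as the communication link: write()
plays the role of send(message) and read() the role of receive(message). The link here is
unidirectional.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {               /* receiver process */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* receive(message) */
        buf[n] = '\0';
        printf("received: %s\n", buf);
        _exit(0);
    }
    const char *msg = "ping";
    write(fd[1], msg, strlen(msg));  /* send(message) */
    wait(NULL);
    return 0;
}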
Message Passing through Communication Link.

Direct and Indirect Communication link


Now, We will start our discussion about the methods of implementing communication links.
While implementing the link, there are some questions that need to be kept in mind like :

1. How are links established?


2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating processes?
4. What is the capacity of a link? Is the size of a message that the link can accommodate fixed
or variable?
5. Is a link unidirectional or bi-directional?

Synchronous and Asynchronous Message Passing:

A process that is blocked is one that is waiting for some event, such as a resource becoming
available or the completion of an I/O operation. IPC is possible between processes on the same
computer as well as between processes running on different computers, i.e. in a
networked/distributed system. In both cases, the process may or may not be blocked while
sending a message or attempting to receive one, so message passing may be blocking or
non-blocking.

Blocking is considered synchronous: a blocking send means the sender is blocked until the
message is received by the receiver. Similarly, a blocking receive has the receiver block until
a message is available.

Non-blocking is considered asynchronous: a non-blocking send has the sender send the message
and continue, and a non-blocking receive has the receiver receive either a valid message or null.

Thread in Operating System

A thread is a single sequence stream within a process. Threads are also called lightweight
processes as they possess some of the properties of processes. Each thread belongs to exactly
one process. In an operating system that supports multithreading, the process can consist of
many threads. But threads can run truly in parallel only if there is more than one CPU;
otherwise, the threads must take turns on the single CPU through context switching.
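
As a concrete illustration, the following minimal C sketch uses POSIX threads (pthreads) to
run two threads within one process; both share the process's address space. Compile with
-lpthread.

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)
{
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}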

Types of Thread
 User Level Thread
 Kernel Level Thread

1. User Level Threads


User Level Thread is a type of thread that is not created using system calls. The kernel plays
no part in the management of user-level threads, so they can be easily implemented by the
user. To the kernel, a process made up of user-level threads appears as a single-threaded
process. Let's look at the advantages and disadvantages of User-Level Threads.

Advantages
 Implementation of the User-Level Thread is easier than Kernel Level Thread.
 Context Switch Time is less in User Level Thread.

 User-Level Thread is more efficient than Kernel-Level Thread.
 Because of the presence of only Program Counter, Register Set, and Stack Space, it has a
simple representation.

Disadvantages
 There is a lack of coordination between Thread and Kernel.
 In case of a page fault, the whole process can be blocked.

2. Kernel Level Threads


A Kernel Level Thread is a type of thread that the operating system recognizes and manages
directly. The kernel maintains its own thread table where it keeps track of all threads in the
system, and the operating system kernel helps in managing them. Kernel threads have somewhat
longer context switching times.

Advantages
 It has up-to-date information on all threads.
 Applications that block frequently are handled better by Kernel-Level Threads.
 Whenever any process requires more time to process, Kernel-Level Thread provides more
time to it.

Disadvantages
 Kernel-Level Thread is slower than User-Level Thread.
 Implementation of this type of thread is a little more complex than a user-level thread.

Multithreading in Operating System

The concept of multi-threading needs proper understanding of these two terms – a process
and a thread. A process is a program being executed. A process can be further divided into
independent units known as threads. A thread is like a small light-weight process within a
process. Or we can say a collection of threads is what is known as a process.

Benefits of Multithreading:
 Multithreading can improve the performance and efficiency of a program by utilizing the
available CPU resources more effectively. Executing multiple threads concurrently, it can
take advantage of parallelism and reduce overall execution time.
 Multithreading can enhance responsiveness in applications that involve user interaction. By
separating time-consuming tasks from the main thread, the user interface can remain
responsive and not freeze or become unresponsive.
 Multithreading can enable better resource utilization. For example, in a server application,
multiple threads can handle incoming client requests simultaneously, allowing the server to
serve more clients concurrently.
 Multithreading can facilitate better code organization and modularity by dividing complex
tasks into smaller, manageable units of execution. Each thread can handle a specific part of
the task, making the code easier to understand and maintain.

Unit 4

CPU Scheduling and Algorithms

CPU Scheduling criteria

▪ CPU utilization – keep the CPU as busy as possible

▪ Throughput – # of processes that complete their execution per time unit

▪ Turnaround time – amount of time to execute a particular process

▪ Waiting time – amount of time a process has been waiting in the ready queue

▪ Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)

CPU and I/O Burst Cycle

CPU-I/O Burst Cycle: The success of CPU scheduling depends on an observed property of
processes:

 process execution consists of a cycle of CPU execution and I/O wait.

 Processes alternate between these two states.

Process execution begins with a CPU burst. That is followed by an I/O burst, which is followed
by another CPU burst, then another I/O burst and so on. Eventually, the final CPU burst ends
with a system request to terminate execution.

Thus, any process would typically require both CPU time and I/O time. While a process is using
the I/O resources, the CPU remains idle, and vice versa. To utilize the resources efficiently we
can schedule other processes to use the otherwise idle resources.

Preemptive Scheduling and Non-Preemptive Scheduling:

These are the two techniques to schedule the incoming processes:

 Non-Preemptive Scheduling: the CPU is relinquished only when the currently executing
process gives it up voluntarily.

 Preemptive Scheduling: the operating system may decide to favour another process,
preempting the currently executing process.

Scheduling Algorithms in OS

First Come First Serve

First Come First Serve, shortly known as FCFS, is the first CPU process scheduling algorithm.
In First Come First Serve, we allow the processes to execute in a linear manner.

This means that whichever process enters the ready queue first is executed first.
This shows that the First Come First Serve Algorithm follows the First In First Out (FIFO) principle.

Advantages
1. In order to allocate processes, it uses a First In First Out queue.
2. The FCFS CPU scheduling process is straightforward and easy to implement.
3. Since FCFS is non-preemptive, there is no chance of a process starving: every process
eventually gets its turn.
4. As there is no consideration of process priority, it is an equitable algorithm.

Disadvantages
o FCFS CPU scheduling results in long waiting times.
o FCFS CPU scheduling favors CPU-bound processes over input/output-bound ones.
o In FCFS there is a chance of occurrence of the convoy effect.
o Because FCFS is so straightforward, it often isn't very effective. Extended waiting
periods go hand in hand with this: all other processes are left idle while the CPU is
busy processing one time-consuming process.
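
A small C sketch that computes FCFS waiting times for processes that all arrive at time 0
(the burst times 24, 3 and 3 are made up for illustration): each process waits for the sum of
all earlier bursts, giving waiting times 0, 24 and 27 and an average of 17.

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};            /* hypothetical burst times */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        total += wait;                   /* P[i] waits for all earlier bursts */
        printf("P%d waits %d\n", i + 1, wait);
        wait += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total / n);
    return 0;
}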

Shortest Job First (SJF) Scheduling

Till now, we were scheduling processes according to their arrival time (in FCFS scheduling).
The SJF scheduling algorithm, however, schedules processes according to their burst time.

In SJF scheduling, the process with the lowest burst time, among the list of available processes in
the ready queue, is going to be scheduled next.

However, it is very difficult to predict the burst time needed for a process hence this algorithm is
very difficult to implement in the system.

Advantages
1. Maximum throughput
2. Minimum average waiting and turnaround time

Disadvantages
1. May suffer from the problem of starvation.
2. It is not implementable in practice because the exact burst time of a process can't be
known in advance.
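
A worked example with hypothetical values: four processes arrive together with burst times
P1 = 6, P2 = 8, P3 = 7 and P4 = 3. SJF runs them in the order P4, P1, P3, P2, giving waiting
times of 3 for P1, 16 for P2, 9 for P3 and 0 for P4, an average of (3 + 16 + 9 + 0) / 4 = 7.
FCFS order (P1, P2, P3, P4) would instead give waiting times 0, 6, 14 and 21, an average of
10.25.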

Round Robin CPU Scheduling

Round Robin CPU Scheduling is one of the most widely used CPU scheduling algorithms.
Round Robin CPU Scheduling uses a Time Quantum (TQ). A process runs for at most one time
quantum at a stretch; the quantum is subtracted from the process's remaining burst time, and the
process rejoins the back of the ready queue if work remains.

Time Sharing is the main emphasis of the algorithm. Each step of this algorithm is carried out
cyclically. The system defines a specific time slice, known as a time quantum.

Advantages

1. A fair amount of CPU is allocated to each job.


2. Because it doesn't depend on the burst time, it can truly be implemented in the system.
3. It is not affected by the convoy effect or the starvation problem as occurred in First Come
First Serve CPU Scheduling Algorithm.

Disadvantages

1. Very small time slices result in decreased CPU output, because of the extra context switches.
2. The Round Robin approach spends comparatively more time swapping contexts.
3. The time quantum has a significant impact on its performance.
4. Processes cannot have priorities established.
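
A worked example with hypothetical values: three processes P1 = 24, P2 = 3 and P3 = 3 arrive
together and the time quantum is 4. The schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1 runs
10-30 without interruption. The waiting times are P1 = 6, P2 = 4 and P3 = 7, so the average is
17/3 ≈ 5.66.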

Priority Scheduling

In Priority scheduling, there is a priority number assigned to each process. In some systems, the
lower the number, the higher the priority. While, in the others, the higher the number, the higher
will be the priority. The Process with the higher priority among the available processes is given
the CPU. There are two types of priority scheduling algorithms: one is preemptive priority
scheduling, while the other is non-preemptive priority scheduling.

Deadlock Prevention

 Deadlocks can be prevented by preventing at least one of the four required conditions:

1 Mutual Exclusion

 Shared resources such as read-only files do not lead to deadlocks.

 Unfortunately some resources, such as printers and tape drives, require exclusive access
by a single process.

2 Hold and Wait

 To prevent this condition processes must be prevented from holding one or more
resources while simultaneously waiting for one or more others. There are several
possibilities for this:
o Require that all processes request all resources at one time. This can be wasteful
of system resources if a process needs one resource early in its execution and
doesn't need some other resource until much later.
o Require that processes holding resources must release them before requesting new
resources, and then re-acquire the released resources along with the new ones in a
single new request. This can be a problem if a process has partially completed an
operation using a resource and then fails to get it re-allocated after releasing it.
o Either of the methods described above can lead to starvation if a process requires
one or more popular resources.

3 No Preemption

 Preemption of process resource allocations can prevent this condition of deadlocks, when
it is possible.
o One approach is that if a process is forced to wait when requesting a new
resource, then all other resources previously held by this process are implicitly
released, ( preempted ), forcing this process to re-acquire the old resources along
with the new resources in a single request, similar to the previous discussion.
o Another approach is that when a resource is requested and not available, then the
system looks to see what other processes currently have those resources and are
themselves blocked waiting for some other resource. If such a process is found,
then some of their resources may get preempted and added to the list of resources
for which the process is waiting.
o Either of these approaches may be applicable for resources whose states are easily
saved and restored, such as registers and memory, but are generally not applicable
to other devices such as printers and tape drives.

4 Circular Wait

 One way to avoid circular wait is to number all resources, and to require that processes
request resources only in strictly increasing ( or decreasing ) order.
 In other words, in order to request resource Rj, a process must first release all Ri such
that i >= j.
 One big challenge in this scheme is determining the relative ordering of the different
resources

Deadlock Avoidance

 The general idea behind deadlock avoidance is to keep the system from ever entering a
deadlocked state, by granting resource requests only when doing so cannot lead to deadlock.
 This requires more information about each process, AND tends to lead to low device
utilization. ( I.e. it is a conservative approach. )
 In some algorithms the scheduler only needs to know the maximum number of each
resource that a process might potentially use. In more complex algorithms the scheduler
can also take advantage of the schedule of exactly what resources may be needed in what
order.
 When a scheduler sees that starting a process or granting resource requests may lead to
future deadlocks, then that process is just not started or the request is not granted.
 A resource allocation state is defined by the number of available and allocated resources,
and the maximum requirements of all processes in the system.

Safe State

 A state is safe if the system can allocate all resources requested by all processes ( up to
their stated maximums ) without entering a deadlock state.
 More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ...,
PN } such that all of the resource requests for Pi can be granted using the resources
currently allocated to Pi and all processes Pj where j < i. ( I.e. if all the processes prior to
Pi finish and free up their resources, then Pi will be able to finish also, using the
resources that they have freed up. )
 If a safe sequence does not exist, then the system is in an unsafe state, which MAY lead to
deadlock. ( All safe states are deadlock free, but not all unsafe states lead to deadlocks. )

Figure - Safe, unsafe, and deadlocked state spaces.
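
The safety test described above can be implemented with the safety algorithm used by the
Banker's algorithm. A C sketch follows; the matrices are made-up illustrative values, with
need[i][j] standing for each process's remaining maximum claim.

/* Sketch of the safety algorithm: the state is safe iff every process
 * can finish in some order using only currently available resources
 * plus those freed by earlier finishers. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes */
#define R 2   /* resource types */

bool is_safe(int avail[R], int alloc[P][R], int need[P][R])
{
    int work[R];
    bool finish[P] = {false};
    for (int j = 0; j < R; j++) work[j] = avail[j];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {             /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;   /* no process can finish: unsafe */
    }
    return true;                       /* a safe sequence exists */
}

int main(void)
{
    int avail[R]    = {3, 2};                      /* illustrative values */
    int alloc[P][R] = {{1, 0}, {2, 1}, {0, 1}};
    int need[P][R]  = {{2, 2}, {1, 1}, {3, 1}};
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}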

Scheduling Queues

 The Job Queue stores all processes that are entered into the system.
 The Ready Queue holds processes in the ready state.
 Device Queues hold processes that are waiting for any device to become available. For
each I/O device, there are separate device queues.
The ready queue is where a new process is initially placed. It sits in the ready queue, waiting to
be chosen for execution or dispatched. One of the following occurrences can happen once the
process has been assigned to the CPU and is running:

 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new child process and then wait for the child to finish.
 As a result of an interrupt, the process could be forcibly removed from the CPU and
returned to the ready queue.

In the first two cases, the process eventually moves from the waiting state back to the ready
state and returns to the ready queue. This cycle is repeated until a process is terminated, at
which point it is withdrawn from all queues, and its PCB and resources are deallocated.

Unit 5
Memory Management

What is Memory?

Computer memory can be defined as a collection of some data represented in the binary format.
On the basis of various functions, memory can be classified into various categories. We will
discuss each one of them later in detail.

A computer device that is capable of storing information or data temporarily or permanently is
called a storage device.

Fixed Partitioning

The earliest and one of the simplest techniques that can be used to load more than one process
into main memory is fixed partitioning, or contiguous memory allocation.

In this technique, the main memory is divided into partitions of equal or different sizes. The
operating system always resides in the first partition while the other partitions can be used to
store user processes. The memory is assigned to the processes in contiguous way.

In fixed partitioning,

1. The partitions cannot overlap.


2. A process must be contiguously present in a partition for the execution.

1. Internal Fragmentation

If the size of the process is less than the total size of its partition, then some part of the partition
is wasted and remains unused. This wasted memory inside a partition is called internal
fragmentation.

For example, if a 4 MB partition is used to load only a 3 MB process, the remaining 1 MB is
wasted.

2. External Fragmentation

The total unused space across the partitions cannot be used to load a process even though
enough space is available in total, because the space is not contiguous.

For example, the remaining 1 MB of each partition cannot be combined into one unit to store a
4 MB process. Despite the fact that sufficient space is available in total, the process will not be
loaded.

Dynamic Partitioning

Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this
technique, the partition size is not declared initially. It is declared at the time of process loading.

The first partition is reserved for the operating system. The remaining space is divided into parts.
The size of each partition will be equal to the size of the process. The partition size varies
according to the need of the process so that the internal fragmentation can be avoided.

Advantages of Dynamic Partitioning over fixed partitioning

1. No Internal Fragmentation

Given the fact that the partitions in dynamic partitioning are created according to the need of the
process, It is clear that there will not be any internal fragmentation because there will not be any
unused remaining space in the partition.

2. No Limitation on the size of the process

In fixed partitioning, a process with a size greater than the size of the largest partition could
not be executed due to the lack of sufficient contiguous memory. In dynamic partitioning, the
process size is not restricted in this way, since the partition size is decided according to the
process size.

3. Degree of multiprogramming is dynamic

Due to the absence of internal fragmentation, there will not be any unused space in the partition
hence more processes can be loaded in the memory at the same time.

Disadvantages of dynamic partitioning

External Fragmentation

Absence of internal fragmentation doesn't mean that there will not be external fragmentation.

Let's consider three processes P1 (1 MB), P2 (3 MB) and P3 (1 MB) being loaded into their
respective partitions of the main memory.

After some time P1 and P3 got completed and their assigned space is freed. Now there are two
unused partitions (1 MB and 1 MB) available in the main memory but they cannot be used to
load a 2 MB process in the memory since they are not contiguously located.

Bit Map for Dynamic Partitioning

The main concern in dynamic partitioning is keeping track of all the free and allocated
partitions. The operating system uses the following data structures for this task.

1. Bit Map
2. Linked List

Bit map is the less commonly used of the two data structures. In this scheme, the main memory
is divided into a collection of allocation units. One or more allocation units may be allocated to
a process according to the need of that process. The size of an allocation unit is fixed, is defined
by the operating system, and never changes: although the partition size may vary, the allocation
unit size is fixed.

The main task of the operating system is to keep track of whether each partition is free or filled.
For this purpose, the operating system manages another data structure called the bitmap.

Each allocation unit, whether it holds part of a process or a hole, is represented by a flag bit in
the bitmap. How many allocation units a single flag bit covers depends on the operating system.

Linked List for Dynamic Partitioning

The better and more popular approach to keeping track of the free and filled partitions is using
a linked list.

In this approach, the Operating system maintains a linked list where each node represents one
partition. Every node has three fields:

1. The first field of the node stores a flag bit which shows whether the partition is a hole or
holds a process.
2. The second field stores the starting index of the partition.
3. The third field stores the end index of the partition.

If a partition is freed at some point in time, then that partition can be merged with an adjacent
free partition without any extra effort, as the sketch below illustrates.
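
A hypothetical C layout for such a node, together with the merge step just described (the field
names are illustrative):

/* One node per partition, kept in address order. */
struct partition {
    int  is_hole;              /* flag: 1 = free hole, 0 = process    */
    int  start;                /* starting index of the partition     */
    int  end;                  /* ending index of the partition       */
    struct partition *next;    /* next partition in address order     */
};

/* When a partition is freed, merging with a free neighbour is just a
 * matter of extending one node and unlinking the other: */
void merge_with_next(struct partition *p)
{
    if (p->is_hole && p->next && p->next->is_hole) {
        struct partition *dead = p->next;
        p->end = dead->end;    /* absorb the neighbour's range */
        p->next = dead->next;  /* unlink it (caller frees the node) */
    }
}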

Virtual Memory in OS

Virtual memory is a storage scheme that gives the user the illusion of having a very large main
memory. This is done by treating a part of secondary memory as if it were main memory.

In this scheme, the user can load processes bigger than the available main memory, under the
illusion that enough memory is available to load them.

Instead of loading one big process in the main memory, the Operating System loads the different
parts of more than one process in the main memory.

By doing this, the degree of multiprogramming will be increased and therefore, the CPU
utilization will also be increased.

Advantages of Virtual Memory

1. The degree of multiprogramming will be increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy more RAM.

Disadvantages of Virtual Memory

1. The system becomes slower, since swapping takes time.
2. It takes more time to switch between applications.
3. The user has less hard disk space available for personal use.

What is Fragmentation?

"Fragmentation is a process of data storage in which memory space is used inadequately,


decreasing ability or efficiency and sometimes both." The precise implications of fragmentation
depend on the specific storage space allocation scheme in operation and the particular
fragmentation type. In certain instances, fragmentation contributes to "unused" storage capacity,
and the concept also applies to the unusable space generated in that situation.

Internal Fragmentation

More memory is often reserved than is required, in order to adhere to the restrictions governing
storage allocation. For instance, memory may be supplied to programs only in blocks (say,
multiples of 4 bytes); as a result, if a program demands 29 bytes, it will get a block of 32 bytes.
The surplus storage goes to waste when this occurs. The useless space lies inside an allocated
area in this case.

This structure, called fixed-size partitioning, suffers from wasteful memory use: any process
occupies an entire partition, no matter how small the process is.

This waste is termed internal fragmentation. Unlike many other forms of fragmentation,
internal fragmentation is hard to reclaim; typically, the only way to eliminate it is a different
allocation design.

For instance, in dynamic storage allocation, memory pools reduce internal fragmentation
significantly by spreading the space overhead over a larger number of objects.

External Fragmentation

External fragmentation occurs when free storage is divided into small lots that are interspersed
with allocated memory. It is a weak point of many storage allocation methodologies, which
cannot always organize memory effectively for the programs using it.

The consequence is that, while unused storage is available, it is essentially inaccessible because
it is split into fragments that are individually too small to meet the software's requirements. The
word "external" derives from the fact that the inaccessible space lies outside the allocated
regions.

For example, there may be sufficient free memory in total (say 55 KB) to execute a process that
needs 50 KB, yet the free storage (the fragments) is not adjacent. Here, compaction, paging, or
segmentation strategies can be used to put the scattered free space to work.

1. In internal fragmentation, fixed-sized memory frames are allocated to processes. In
external fragmentation, variable-sized memory frames are allocated to processes.

2. Internal fragmentation occurs when the allocated frame is larger than the process
requires. External fragmentation occurs as processes are loaded into and removed from
memory, leaving scattered holes.

3. The remedy for internal fragmentation is allocating the best-fitting frame. The remedies
for external fragmentation are compaction, paging, and segmentation.

4. Internal fragmentation happens when memory is split into fragments of a fixed length.
External fragmentation happens when memory is split into segments of variable size
depending on each process's length.

5. The difference between the memory allocated and the memory required is the internal
fragmentation. The empty spaces created between non-contiguous pieces of allocated
storage, each too small for a new process to use, are the external fragmentation.

Page Replacement Algorithms in Operating Systems

1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this
algorithm, the operating system keeps track of all pages in the memory in a queue, the oldest
page is in the front of the queue. When a page needs to be replaced page in the front of the
queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the
number of page faults.
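Working Example 1 out by hand (FIFO, 3 frames): references 1, 3 and 0 each fault and fill the
frames; the second 3 is a hit; 5 faults and evicts 1 (the oldest); 6 faults and evicts 3; the final 3
faults and evicts 0. Total: 6 page faults and 1 hit.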

2. Optimal Page replacement: In this algorithm, pages are replaced which would not be used
for the longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.
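Working Example 2 out by hand (Optimal, 4 frames): 7, 0, 1 and 2 fault and fill the frames; 0
hits; 3 faults and evicts 7 (never used again); 0 hits; 4 faults and evicts 1 (never used again);
the remaining references 2, 3, 0, 3, 2, 3 all hit. Total: 6 page faults.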

3. Least Recently Used: In this algorithm, page will be replaced which is least recently used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page
frames. Find the number of page faults.
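Working Example 3 out by hand (LRU, 4 frames): 7, 0, 1 and 2 fault and fill the frames; 0 hits;
3 faults and evicts 7 (the least recently used page); 0 hits; 4 faults and evicts 1; the remaining
references all hit. Total: 6 page faults.

These counts are easy to check mechanically. A small C sketch that counts page faults under
FIFO for Example 1 (a round-robin index over full frames is equivalent to evicting the oldest
page):

#include <stdio.h>

int main(void)
{
    int ref[] = {1, 3, 0, 3, 5, 6, 3};     /* reference string from Example 1 */
    int n = 7, nframes = 3;
    int frames[3] = {-1, -1, -1};          /* -1 marks an empty frame */
    int next = 0, faults = 0;              /* next = FIFO eviction slot */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = ref[i];         /* replace the oldest page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);   /* prints 6 */
    return 0;
}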

Paging vs Segmentation

Sr Paging Segmentation
No.

1 Non-Contiguous memory allocation Non-contiguous memory allocation

2 Paging divides program into fixed size Segmentation divides program into variable
pages. size segments.

3 OS is responsible Compiler is responsible.

4 Paging is faster than segmentation Segmentation is slower than paging

5 Paging is closer to Operating System Segmentation is closer to User

6 It suffers from internal fragmentation It suffers from external fragmentation

7 There is no external fragmentation There is no internal fragmentation

8 Logical address is divided into page Logical address is divided into segment
number and page offset number and segment offset

9 Page table is used to maintain the page Segment Table maintains the segment
information. information

10 Page table entry has the frame number Segment table entry has the base address of
and some flag bits to represent details the segment and some protection bits for the
about pages. segments.

Unit 6
File management

File

A file can be defined as a data structure which stores the sequence of records. Files are stored in
a file system, which may exist on a disk or in the main memory. Files can be simple (plain text)
or complex (specially-formatted).

The collection of files is known as Directory. The collection of directories at the different levels,
is known as File System.

Attributes of the File

1.Name

Every file carries a name by which the file is recognized in the file system. One directory cannot
have two files with the same name.

2.Identifier

Along with the name, each file has a unique identifier, typically a number, by which the file
system tracks it internally. A file also has an extension that indicates its type: for example, a
text file has the extension .txt, and a video file can have the extension .mp4.

3.Type

In a File System, the Files are classified in different types such as video files, audio files, text
files, executable files, etc.

4.Location

In the File System, there are several locations on which, the files can be stored. Each file carries
its location as its attribute.

5.Size

The size of the file is one of its most important attributes. By the size of the file, we mean the
number of bytes the file occupies in memory.

6.Protection

The admin of the computer may want different protections for different files. Therefore each
file carries its own set of permissions for different groups of users.

7.Time and Date

Every file carries a time stamp which contains the time and date at which the file was last
modified.

Operations on the File

1.Create operation:

This operation is used to create a file in the file system. It is the most widely used operation
performed on the file system. To create a new file of a particular type the associated application
program calls the file system. This file system allocates space to the file. As the file system
knows the format of directory structure, so entry of this new file is made into the appropriate
directory.

2. Open operation:

This is the most common operation performed on a file. Once a file is created, it must be
opened before file-processing operations can be performed. To open a particular file, the user
provides its name; the operating system then invokes the open system call and passes the file
name to the file system.

3. Write operation:

This operation is used to write information into a file. A write system call is issued that
specifies the name of the file and the length of the data to be written. The file length is
increased by the specified value, and the file pointer is repositioned after the last byte
written.

4. Read operation:

This operation reads the contents of a file. A read pointer is maintained by the OS, pointing
to the position up to which the data has been read.

5. Re-position or Seek operation:

The seek system call repositions the file pointer from the current position to a specific place in
the file, i.e., forward or backward, depending upon the user's requirement. This operation is
generally supported by file management systems that allow direct-access files.

6. Delete operation:

Deleting a file not only removes all the data stored inside it but also frees the disk space it
occupied. To delete the specified file, the directory is searched; when the directory entry is
located, all the associated file space and the directory entry are released.

7. Truncate operation:

Truncating deletes the contents of a file without deleting its attributes. The file itself is not
removed; only the information stored inside it is discarded.

8. Close operation:

When the processing of the file is complete, it should be closed so that all changes are made
permanent and all occupied resources are released. Closing deallocates all the internal
descriptors that were created when the file was opened.
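The operations above map closely onto OS-level system calls. As a rough sketch, the Python snippet below uses the os module, whose functions are thin wrappers over the underlying open, write, lseek, read, truncate, and unlink calls; the file name demo.txt is purely illustrative.

    import os

    fd = os.open("demo.txt", os.O_RDWR | os.O_CREAT, 0o644)  # create + open
    os.write(fd, b"hello, file system")   # write: file grows, pointer moves past the last byte
    os.lseek(fd, 7, os.SEEK_SET)          # seek: reposition the file pointer to byte 7
    print(os.read(fd, 4))                 # read: prints b'file'
    os.close(fd)                          # close: deallocate the descriptor

    os.truncate("demo.txt", 0)            # truncate: keep the file, discard its contents
    os.remove("demo.txt")                 # delete: release the directory entry and disk space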

File Access Methods

Sequential Access

Most operating systems access files sequentially; in other words, most files need to be accessed
sequentially by the operating system.

In sequential access, the OS reads the file word by word. A pointer is maintained which initially
points to the base address of the file. When the user wants to read the first word of the file, the
pointer provides that word to the user and advances by one word. This process continues
till the end of the file.

Direct Access

Direct access is mostly required in the case of database systems. In most cases, we
need filtered information from the database, and sequential access can be very slow and
inefficient for that purpose.

Suppose every block of storage stores 4 records and we know that the record we need is
stored in the 10th block. In that case, sequential access is inefficient because it traverses all
the preceding blocks in order to reach the needed record.

Direct access gives the required result directly, despite the fact that the operating system has to
perform some extra work, such as determining the desired block number. This method is
commonly used in database applications.
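The block arithmetic behind direct access is simple. The sketch below assumes 1 KiB fixed-size records packed 4 per 4 KiB block (assumed values matching the example above):

    BLOCK_SIZE  = 4096   # assumed bytes per block
    RECORD_SIZE = 1024   # assumed fixed-size records: 4 per block
    RECORDS_PER_BLOCK = BLOCK_SIZE // RECORD_SIZE

    def locate(record_number):
        """Compute where a record lives, for direct (random) access."""
        block  = record_number // RECORDS_PER_BLOCK
        offset = (record_number % RECORDS_PER_BLOCK) * RECORD_SIZE
        # One seek to block * BLOCK_SIZE + offset reaches the record,
        # with no need to read blocks 0 .. block-1 first.
        return block, offset

    print(locate(40))   # -> (10, 0): record 40 sits in the 10th block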

Directory Structure in OS

A directory can be defined as a listing of related files on the disk. The directory may store
some or all of the file attributes.

Single Level Directory

The simplest method is to have one big list of all the files on the disk. The entire system
contains only one directory, which lists every file present in the file system, with one entry
per file.

This type of directory can be used for a simple system.

Advantages
1. Implementation is very simple.
2. If the files are small, searching becomes faster.
3. File creation, searching, and deletion are very simple since there is only one directory.

Disadvantages
1. We cannot have two files with the same name.
2. The directory may become very large, so searching for a file may take a long time.
3. Protection cannot be implemented for multiple users.
4. There is no way to group files of the same kind.

Two Level Directory

In two-level directory systems, we can create a separate directory for each user. There is one
master directory which contains separate directories dedicated to each user. For each user, there
is a different directory at the second level, containing that user's group of files. The system
doesn't let a user enter another user's directory without permission.

Tree Structured Directory

In a tree-structured directory system, any directory entry can be either a file or a subdirectory.
The tree-structured directory system overcomes the drawbacks of the two-level directory system:
similar kinds of files can now be grouped in one directory.

Each user has their own directory and cannot enter another user's directory. A user has
permission to read the root's data but cannot write or modify it; only the system
administrator has complete access to the root directory.

Searching is more efficient in this directory structure. The concept of a current working directory
is used, and a file can be accessed by two types of path: relative or absolute.

An absolute path is the path of a file with respect to the root directory of the system, while a
relative path is the path with respect to the current working directory. In tree-structured
directory systems, the user is given the privilege to create files as well as directories.
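A brief illustration of the two path types using Python's pathlib (the file names are hypothetical):

    from pathlib import Path

    rel = Path("notes/os.txt")                  # relative: resolved against the cwd
    print(rel.is_absolute())                    # False
    print(Path.cwd() / rel)                     # the equivalent absolute path
    print(Path("/home/user/notes/os.txt").is_absolute())   # True: starts at the root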

File Allocation Methods

There are various methods that can be used to allocate disk space to files. Selection of an
appropriate allocation method significantly affects the performance and efficiency of the
system. The allocation method determines how the disk is utilized and how the files are
accessed.

Contiguous Allocation

If the blocks are allocated to a file in such a way that all the logical blocks of the file get
contiguous physical blocks on the hard disk, then such an allocation scheme is known as
contiguous allocation.

For example, consider a directory holding three files. The starting block and the length of
each file are recorded in the directory table, and contiguous blocks are assigned to each file
as per its need.
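Address translation under contiguous allocation is a single addition: physical block = start block + logical block. A sketch with a hypothetical directory table (the file names and block numbers are made up for illustration):

    # Directory entry per file: (start block, length in blocks) - hypothetical values.
    table = {"mail": (19, 6), "list": (28, 4)}

    def physical_block(name, logical):
        """Translate a logical block number to a physical one."""
        start, length = table[name]
        if logical >= length:
            raise IndexError("block outside the file")
        return start + logical   # a single addition: why random access is cheap here

    print(physical_block("mail", 3))   # -> 22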

Advantages
1. It is simple to implement.
2. It gives excellent read performance.
3. It supports random access into files.

Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.

Linked List Allocation

Linked list allocation addresses the problems of contiguous allocation. In linked list allocation,
each file is treated as a linked list of disk blocks. The disk blocks allocated to a particular
file need not be contiguous on the disk; each disk block allocated to a file contains a
pointer which points to the next disk block allocated to the same file.
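Reaching the n-th block of a linked file means following n pointers from the starting block, which is why random access is poor. A sketch with a hypothetical chain of block pointers:

    # next_block[b] = pointer stored inside disk block b (None marks end of file).
    # The chain 9 -> 16 -> 1 -> 10 -> 25 is a made-up example.
    next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}
    start = 9   # kept in the directory entry

    def nth_block(n):
        """Follow n pointers from the start: no random access is possible."""
        block = start
        for _ in range(n):
            block = next_block[block]
        return block

    print(nth_block(3))   # -> 10, after visiting blocks 9, 16 and 1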

Advantages
1. There is no external fragmentation with linked allocation.
2. Any free block can be utilized to satisfy a file block request.
3. A file can continue to grow as long as free blocks are available.
4. The directory entry only contains the starting block address.

Disadvantages
1. Random access is not provided.
2. Pointers occupy some space in the disk blocks.
3. If any pointer in the linked list is lost or damaged, the rest of the file becomes
   inaccessible and the file is effectively corrupted.
4. Every block before the target must be traversed to reach a given position in the file.

Indexed Allocation

Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation
scheme stores all the disk pointers of a file in one block called the index block. The index block
doesn't hold the file data; it holds the pointers to all the disk blocks allocated to that particular
file. The directory entry contains only the index block address.
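With an index block, any logical block is reached in a single lookup. A sketch with hypothetical block numbers:

    # The directory stores only the index block; the index block holds one
    # pointer per data block of the file (made-up block numbers below).
    index_block = [9, 16, 1, 10, 25]

    def block_of(logical):
        """Direct access: one table lookup, no chain traversal."""
        return index_block[logical]

    print(block_of(3))   # -> 10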

Advantages
1. It supports direct access.
2. A bad data block causes the loss of only that block.

Disadvantages
1. A bad index block can cause the loss of the entire file.
2. The maximum size of a file depends on the number of pointers an index block can hold.
3. Having an index block for a small file is wasteful.
4. There is more pointer overhead.

RAID

RAID (redundant array of independent disks) is a setup consisting of multiple disks for data
storage. They are linked together to prevent data loss and/or speed up performance.

RAID 0: Striping

RAID 0, also known as a striped set or a striped volume, requires a minimum of two disks. The
disks are merged into a single large volume where data is stored evenly across the number of
disks in the array.
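Block placement in a stripe set is simple round-robin arithmetic. A sketch assuming a two-disk array (the disk count is an illustrative assumption):

    def stripe(block_number, n_disks=2):
        """RAID 0 spreads consecutive blocks round-robin across the disks."""
        disk   = block_number % n_disks    # which disk holds the block
        offset = block_number // n_disks   # position of the block on that disk
        return disk, offset

    for b in range(6):
        print(b, stripe(b))   # blocks alternate: disk 0, disk 1, disk 0, ...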

RAID 1: Mirroring

RAID 1 is an array consisting of at least two disks where the same data is stored on each to
ensure redundancy. The most common use of RAID 1 is setting up a mirrored pair consisting of
two disks in which the contents of the first disk are mirrored on the second. This is why such a
configuration is also called mirroring.

RAID 2: Bit-Level Striping with Dedicated Hamming-Code Parity

RAID 2 is rarely used in practice today. It combines bit-level striping with error checking and
information correction. This RAID implementation requires two groups of disks – one for
writing the data and another for writing error correction codes. RAID 2 also requires a special
controller for the synchronized spinning of all disks.

RAID 3: Bit-Level Striping with Dedicated Parity

Like RAID 2, RAID 3 is rarely used in practice. This RAID implementation utilizes bit-level
striping and a dedicated parity disk. Because of this, it requires at least three drives, where two
are used for storing data strips, and one is used for parity.

RAID 4: Block-Level Striping with Dedicated Parity

RAID 4 is another unpopular standard RAID level. It consists of block-level data striping across
two or more independent disks and a dedicated parity disk.

The implementation requires at least three disks – two for storing data strips and one dedicated
to storing parity and providing redundancy.

RAID 5: Striping with Parity

RAID 5 is considered the most secure and most common RAID implementation. It combines
striping and parity to provide a fast and reliable setup. Such a configuration gives the user
the storage usability of RAID 1 and the performance efficiency of RAID 0.
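Parity in RAID 5 is the bitwise XOR of the data blocks in a stripe; XOR-ing the surviving blocks with the parity rebuilds a lost block. A minimal sketch on single bytes (the data values are arbitrary):

    # One stripe of three data bytes plus parity: P = D0 ^ D1 ^ D2.
    d0, d1, d2 = 0b10110100, 0b01101001, 0b11100011
    parity = d0 ^ d1 ^ d2

    recovered = d0 ^ d2 ^ parity   # pretend the disk holding d1 failed
    assert recovered == d1         # XOR of the survivors rebuilds the lost block
    print(bin(recovered))          # -> 0b1101001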

RAID 6: Striping with Double Parity

RAID 6 is an array similar to RAID 5 with the addition of double parity. For this
reason, it is also referred to as double-parity RAID.

This setup requires a minimum of four drives. The setup resembles RAID 5 but includes two
parity blocks distributed across the disks. It therefore uses block-level striping to
distribute the data across the array and stores two parity blocks for each data stripe.

