
OPERATING SYSTEM PRESENTATION WORK

GROUP V
LECTURER: MADAM BARBARA ASINGIRWE KABWIGA

GROUP MEMBERS

AJUNGO CALEB CHRISTIAN BU/UP/2023/0952
MUNGUECONI JERRY BU/UP/2023/0958
NAMBOOWA STELLA BU/UP/2023/0961
ODEKE RICHARD PATRICK BU/UP/2023/0965
OMARA ANDREW BU/UP/2023/3642
SWAGA RICHARD BU/UP/2023/1069
DEFINITIONS: DEADLOCKS
Deadlocks occur in a computing system when two or more processes are each waiting for the others to release resources they hold, resulting in a stalemate where no process can proceed.
OR
Deadlock is a condition where each process is waiting for an event (such as the release of a resource) that can only be caused by another waiting process in the system.
NECESSARY AND SUFFICIENT CONDITIONS FOR
DEADLOCK
ALL of these four must happen simultaneously for a deadlock to occur:
1. Mutual Exclusion:
At least one resource must be held by a process in a non-sharable (exclusive) mode.
2. Hold and Wait:
A process holds a resource while waiting for another resource.
3. No Preemption:
Resources are released only voluntarily; nothing can force a process to give up a resource it holds.
4. Circular Wait:
A circular chain of processes exists, where each process is waiting for a resource held by the next process in the chain.
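To make the four conditions concrete, here is a minimal Python sketch (an illustration added for this write-up, not part of the original slides) in which two threads deadlock on a pair of locks:

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker_1():
    with lock_a:          # holds A (mutual exclusion)...
        time.sleep(0.1)
        with lock_b:      # ...while waiting for B (hold and wait)
            pass

def worker_2():
    with lock_b:          # holds B...
        time.sleep(0.1)
        with lock_a:      # ...while waiting for A -> circular wait
            pass

# Python never revokes a held lock (no preemption), so once both threads
# reach their inner acquire, all four conditions hold and the program hangs.
t1 = threading.Thread(target=worker_1)
t2 = threading.Thread(target=worker_2)
t1.start(); t2.start()
t1.join(); t2.join()      # never returns: the threads are deadlocked
```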
DEADLOCK PREVENTION

This involves implementing strategies to ensure that the four necessary conditions for deadlock cannot all hold at the same time.
1. RESOURCE ORDERING
• Assign a unique number to each resource.
• Require processes to acquire resources in
ascending order of their numbers.
• This prevents circular wait, as processes will request resources in a predetermined order.
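A minimal Python sketch of resource ordering (illustrative only; the numbering scheme here is an assumption for the example): every thread requests its locks in ascending index order, so no circular chain of waits can form.

```python
import threading

# Each resource gets a unique number: here, its index in this list.
locks = [threading.Lock() for _ in range(3)]

def acquire_in_order(needed):
    """Acquire the locks named in `needed` in ascending numeric order."""
    for i in sorted(needed):
        locks[i].acquire()

def release_all(needed):
    for i in sorted(needed, reverse=True):
        locks[i].release()

# Even if two threads both want locks {0, 2}, each requests 0 before 2,
# so one thread simply waits for the other; a cycle can never appear.
acquire_in_order({2, 0})
release_all({2, 0})
```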
2. BREAKING THE HOLD AND WAIT CONDITION
• Require a process to acquire all the necessary resources before it starts executing. This ensures a process doesn't hold any resource while waiting for others.
• However, this can be inefficient due to underutilization of resources.
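One way to realize this in code (a sketch, assuming non-blocking lock acquisition is available, as with Python's threading.Lock): request everything up front, and back out completely if any resource is busy.

```python
import threading

def acquire_all_or_none(locks):
    """Try to grab every lock up front. If any is busy, release what was
    taken and report failure, so the caller never holds one resource
    while blocked on another (breaking hold and wait)."""
    taken = []
    for lk in locks:
        if lk.acquire(blocking=False):
            taken.append(lk)
        else:
            for held in reversed(taken):
                held.release()
            return False          # caller retries later
    return True
```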

3. PREVENTING MUTUAL EXCLUSION
• Allow concurrent access to resources whenever possible. This can be difficult in certain situations, such as when resources are inherently non-sharable (e.g., printers).

4. PREEMPTION
• Allow the system to forcefully take away a resource from a process if doing so will prevent a deadlock. This can be done by setting a maximum time limit for a process to hold a resource. If the limit is exceeded, the resource is preempted.
DEADLOCK AVOIDANCE

Deadlock avoidance is a more sophisticated approach than prevention, as it allows more flexibility in resource allocation while still preventing deadlock.
It involves analyzing the current state of the system and future resource requests to determine if a deadlock is possible.
Safe state:
A system is in a safe state if there exists a sequence in which its processes can complete their execution without causing a deadlock.
Deadlock:
No forward progress can be made.
Unsafe state:
A state that may lead to deadlock.
The rule is simple: if granting a resource request would put the system in an unsafe state, do not honor that request.
BANKER'S ALGORITHM

• The Banker's algorithm is a deadlock-avoidance algorithm which ensures that the system always remains in a safe state.
• It is named after a banking analogy, where a bank must make sure that it has sufficient funds to meet all withdrawal requests.
• It maintains a vector of available resources, a matrix of maximum resource needs for each process, and a matrix of currently allocated resources.
• When a process requests resources, the algorithm checks if granting the request would leave the system in a safe state. If so, the request is granted; otherwise, it is denied.
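The safety check at the heart of the algorithm can be sketched as follows (a simplified Python illustration using the classic textbook numbers, not code from the original slides):

```python
def is_safe(available, max_need, allocation):
    """Return True if some ordering of the processes lets all of them
    finish: repeatedly 'run' any process whose remaining need fits in
    the free pool, then reclaim its allocation."""
    n = len(allocation)
    work = list(available)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Classic 5-process, 3-resource-type instance:
print(is_safe(
    [3, 3, 2],                                                  # available
    [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],    # max need
    [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))   # allocated
# -> True: a safe sequence such as P1, P3, P4, P0, P2 exists
```

To evaluate a request, the algorithm tentatively grants it and reruns this check; if the resulting state is unsafe, the grant is rolled back and the request is denied.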
DEADLOCK DETECTION AND RECOVERY
DEADLOCK DETECTION
• Deadlock detection involves identifying whether a deadlock has occurred in a system.
There are two primary methods for detecting deadlocks:
TIMEOUT:
This approach sets a maximum time limit for a process to acquire a resource. If a process exceeds this limit, it is assumed to be in a deadlock. While simple, this method can produce false positives if a legitimate resource acquisition simply takes longer than expected.
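A sketch of the timeout approach (illustrative; Python's Lock.acquire accepts a timeout argument):

```python
import threading

resource = threading.Lock()

def use_resource(limit_seconds=5.0):
    # If the lock cannot be obtained within the limit, we *assume* a
    # deadlock; as noted above, a slow but legitimate holder would
    # trigger the same alarm (a false positive).
    if not resource.acquire(timeout=limit_seconds):
        raise RuntimeError("possible deadlock: acquisition timed out")
    try:
        pass  # ... use the resource ...
    finally:
        resource.release()
```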
Resource Allocation Graphs (RAG)
This creates a graph whose nodes represent processes and resources, and whose edges represent resource allocations or requests. A cycle in the graph signals a deadlock. RAG-based detection is more accurate than timeouts but can be computationally expensive for large systems.
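Detection then reduces to cycle-finding. A minimal Python sketch (assuming the graph has already been collapsed to a wait-for graph between processes):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {process: set of
    processes it is waiting on}. A cycle means a deadlock exists."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}
    def dfs(p):
        colour[p] = GREY
        for q in wait_for.get(p, ()):
            if colour.get(q, WHITE) == GREY:      # back edge -> cycle
                return True
            if colour.get(q, WHITE) == WHITE and dfs(q):
                return True
        colour[p] = BLACK
        return False
    return any(colour[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a circular wait.
print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True
```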
DEADLOCK RECOVERY
Once a deadlock is detected, recovery strategies can be employed to resolve the situation and allow the system to continue operating.
There are several strategies for deadlock recovery:
1. Process Termination:
One or more processes involved in the deadlock can be terminated. This is a
drastic measure and can lead to data loss or inconsistent system state.
2. Resource Preemption
Resources can be forcibly taken from processes, allowing other processes to
proceed. However, this can lead to rollbacks and additional overhead.
3. Rollback
Processes can be rolled back to a previous state where no deadlock existed.
This requires maintaining checkpoints or logs.
DISK MANAGEMENT
Disk management refers to the process of managing the storage devices in a computer, such as hard drives, SSDs, or external storage.
It involves tasks like:
• Partitioning:
Dividing the physical disk into multiple sections (Partitions) to
manage data more effectively.
• Formatting:
Preparing a partition for use by installing a file system (like NTFS, FAT32, or ext4).
• Mounting/Assigning Drive Letters:
Associating partitions with specific letters (like C: or D:) in the operating system.
• Creating/Deleting Volumes:
Adding new storage or clearing old partitions for reuse.
DISK STRUCTURES
Disk structures refer to the way data is stored and accessed on storage devices like hard drives, SSDs, or optical discs.
They are crucial for how operating systems manage files and ensure efficient use of
storage.
COMPONENTS OF DISK STRUCTURES
• Sectors
This is the smallest physical storage unit on a disk. Data is stored in sectors, typically 512 bytes or 4 KB in size.
• Clusters
A group of sectors combined to form the smallest allocatable unit for file storage. This reduces the overhead of managing very small files, but larger clusters can waste space if files don't fill them.
• Tracks and Cylinders
Tracks are concentric circles on the surface of the disk where data is recorded. Cylinders are groups of tracks at the same position on each disk platter in a hard drive. Accessing data within the same cylinder can be faster, as the read/write head doesn't have to move much.
[Diagram: hard disk layout showing tracks, sectors, and blocks]
• File Systems
They determine how data is stored on and retrieved from a disk. Common file systems include NTFS, FAT32, ext4, etc. The file system also manages metadata like file permissions, timestamps, and more.
• Inodes:
These are data structures that store metadata about files, such as file size, owner, and pointers to the actual data blocks.
• Master Boot Record (MBR) and GUID Partition Table (GPT)
Structures that store information about disk partitions and help the operating system to boot. MBR is older and supports up to 4 primary partitions, while GPT supports more partitions and larger disks.
DISK SCHEDULING
Disk scheduling refers to the method used by the operating system to
determine the order in which disk read and write requests are processed.
Since hard drives have mechanical components (like read and write
heads that need to move to different parts of the disk), efficient
scheduling minimizes seek time (the time it takes for the head to reach
the desired location on the disk), which improves system performance.

The following are the main types of disk scheduling algorithms:

FIRST-COME, FIRST-SERVED (FCFS)
It is the simplest disk scheduling algorithm. I/O requests are served in their order of arrival: the request that arrives first is served first.
Simple but not efficient, as it can lead to long seek times when requests are far apart on the disk.
SHORTEST SEEK TIME FIRST(SSTF)
The disk selects the request closest to the current head position.
Reduces seek time compared to FCFS but can lead to starvation.
SCAN (ELEVATOR ALGORITHM)
The disk head moves in one direction (e.g., from the outermost to the innermost track) and processes requests along the way. Once it reaches the end, it reverses direction.
This algorithm is efficient as it reduces the average seek time and doesn't overly favor requests close to the current head position.
It provides a more balanced approach, but requests just behind the head can still wait a long time.

C-SCAN
Similar to SCAN, but the head services requests in only one direction (e.g., from outer to inner). When it reaches the end, it quickly returns to the starting point without servicing requests on the return trip, hence the name Circular SCAN.
It prevents starvation but can lead to longer wait times for requests on the opposite side of the disk.
PRACTICAL EXAMPLES ON DISK SCHEDULING
Given the following track requests in the disk queue, compute the Total Head Movement (THM) of the read/write head: 95, 180, 34, 119, 11, 123, 62, 64
Consider that the read/write head is positioned at location 50. Prior to this, track location 199 was serviced. Show the total head movement for a 200-track disk (0-199).
FCFS
Service order: 50 → 95 → 180 → 34 → 119 → 11 → 123 → 62 → 64
Total Head Movement Computation:
THM = (95-50) + (180-95) + (180-34) + (119-34) + (119-11) + (123-11) + (123-62) + (64-62)
    = 45 + 85 + 146 + 85 + 108 + 112 + 61 + 2 = 644 tracks

SSTF
Service order: 50 → 62 → 64 → 34 → 11 → 95 → 119 → 123 → 180
Total Head Movement Computation:
THM = (64-50) + (64-11) + (180-11) = 14 + 53 + 169 = 236 tracks
(The head first sweeps up to 64, servicing 62 and 64; then down to 11, servicing 34 and 11; then up to 180, servicing 95, 119, and 123.)

C-SCAN
The head was last moving toward track 0 (it came from 199 down to 50), so it sweeps down servicing 34 and 11, reaches 0, flies back to 199, and sweeps down again servicing 180, 123, 119, 95, 64, and 62.
Total Head Movement Computation:
THM = (50-0) + (199-62) + α = 50 + 137 + α
where α is the return seek from track 0 back to track 199. If the fly-back is not counted as servicing movement, THM = 187 tracks.
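The FCFS and SSTF figures above can be checked mechanically. A small Python sketch (added for illustration):

```python
def fcfs_thm(head, queue):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for track in queue:
        total += abs(track - head)
        head = track
    return total

def sstf_thm(head, queue):
    """Total head movement when the nearest pending request goes next."""
    pending, total = list(queue), 0
    while pending:
        track = min(pending, key=lambda t: abs(t - head))
        total += abs(track - head)
        head = track
        pending.remove(track)
    return total

queue = [95, 180, 34, 119, 11, 123, 62, 64]
print(fcfs_thm(50, queue))   # 644 tracks, matching the FCFS computation
print(sstf_thm(50, queue))   # 236 tracks, matching the SSTF computation
```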
DISK RELIABILITY
Disk reliability refers to the ability of a storage device to function correctly over time without failure or data loss.
Ensuring disk reliability is crucial for maintaining data integrity and system performance, and for avoiding costly downtime.

FACTORS AFFECTING DISK RELIABILITY

Physical Wear and Tear:
Repeated read and write operations can cause wear on the disk
components, leading to degradation and potential failure.
Manufacturing defects:
Defects in the manufacturing process can result in faulty disks that are
prone to failure.
Power Failures:
Sudden power outages or fluctuations can disrupt disk operations and potentially damage the disk.
Human Errors:
Accidental damage, improper handling, and incorrect configuration
can contribute to disk failures.
Environmental Factors:
Extreme temperatures, humidity, and vibrations can affect disk
performance and longevity.
Improving Disk Reliability
Redundancy:
Redundancy refers to using multiple disks or storage systems to
ensure data protection.
Implementing RAID (Redundant Array of Independent Disks) configurations can provide data redundancy and fault tolerance.
Key types of redundancy in disks
• RAID 1 (Mirroring)
Data is duplicated across two or more disks. If one disk fails, the system can continue functioning using the other disks containing the same data.
• RAID 5 (Parity-based redundancy)
Parity is a concept used in data storage systems to detect and correct errors, ensuring data can be recreated in case of disk failure. Data is striped across multiple disks, with parity information distributed among them. If one disk fails, the system can use the parity data to reconstruct the lost data (see the parity sketch after this list).
• RAID 6 (Dual parity)
Similar to RAID 5 but with a second, independent parity block, allowing the system to recover even if two disks fail.
• RAID 10 (1+0)
It combines mirroring and striping for increased redundancy and performance.
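The parity idea behind RAID 5/6 is just a byte-wise XOR across the data blocks of a stripe. A small Python sketch (illustrative only, not an actual RAID implementation):

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks: the parity of a stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"disk one", b"disk two", b"disk 3!!"   # one stripe's data
parity = xor_blocks([d1, d2, d3])

# If one disk fails, XOR-ing the survivors with the parity block
# reproduces the missing data exactly.
assert xor_blocks([d1, d3, parity]) == d2
```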
Regular Backups:
Data backups are copies of digital data that are stored separately from the original data.
Backups are important for recovery in case of disk failure.

Disk Monitoring
Monitoring disk health metrics, such as SMART (Self-Monitoring, Analysis and Reporting Technology) attributes, can help identify potential issues early on.

Power Protection
Using uninterruptible power supplies (UPS) can protect disks from power
outages and fluctuations.

Proper Handling and Storage
Avoid physical shocks, extreme temperatures, and excessive vibrations to minimize the risk of damage.

Regular Maintenance
Periodic maintenance, including cleaning and testing, can help prevent disk
failures.
DISK RELIABILITY METRICS
These are measures used to assess and predict the reliability, lifespan, and
overall performance of storage devices.
These metrics help determine the likelihood of a disk failure, how well it can
handle data under various conditions, and how long it will last under normal
usage.

Here are some of the common disk reliability metrics:

Mean Time Between Failures (MTBF)
This indicates the average time a disk is expected to operate without failure. It's often expressed in hours or years.
A higher MTBF indicates greater reliability.

Annualized Failure Rate (AFR)
This metric represents the probability of a disk failing within a year. It's usually expressed as a percentage.
A lower AFR indicates greater reliability.
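MTBF and AFR are related: under the usual constant-failure-rate assumption, AFR ≈ 1 − e^(−hours per year / MTBF). A quick Python check (illustrative; the 1,200,000-hour figure is a made-up example rating):

```python
import math

def afr_from_mtbf(mtbf_hours, hours_per_year=8766):
    """Annualized failure rate implied by an MTBF figure, assuming a
    constant (exponential) failure rate."""
    return 1 - math.exp(-hours_per_year / mtbf_hours)

print(f"{afr_from_mtbf(1_200_000):.2%}")   # ~0.73% per year
```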

Data Loss Probability (DLP)
This metric measures the likelihood of data loss due to disk failures, considering factors like redundancy and backup strategies. A lower DLP indicates a lower risk of data loss.
Error Rates
These metrics measure the frequency of errors that occur during disk
operations, such as read errors, write errors, and seek errors. Higher error
rates can be indicative of underlying issues that may lead to failure.

SMART (Self-Monitoring, Analysis and Reporting Technology) Attributes
SMART is a technology built into most modern disks that monitors various parameters related to disk health, such as temperature, seek error rates, and power-on hours. By analyzing SMART attributes, you can identify potential issues and take preventive measures.
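For example, on a Linux system with the smartmontools package installed, the attribute table can be pulled and scanned from a script (a sketch; /dev/sda is a placeholder device name and the command typically needs root privileges):

```python
import subprocess

# 'smartctl -A' prints the SMART attribute table for a device.
result = subprocess.run(["smartctl", "-A", "/dev/sda"],
                        capture_output=True, text=True)

for line in result.stdout.splitlines():
    # Reallocated sectors and temperature are commonly watched as
    # early warning signs of disk trouble.
    if "Reallocated_Sector" in line or "Temperature" in line:
        print(line)
```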
