
Operating System

Practical:- 1

 Operating system with all features and diagram.


:- An operating system (OS) acts as the intermediary between users and computer hardware, managing resources, providing services, and facilitating user interaction. It is the software that boots the computer, manages resources, handles file systems, and ensures the smooth execution of programs. The OS provides various features, including process management, memory management, file system management, input/output (I/O) device management, and security.

An operating system is a software layer that enables communication between users, application software, and hardware, while efficiently managing the CPU, memory, storage, and I/O devices to ensure proper functionality and performance of the computer system.

Key features of an Operating System and their roles:


1. Managing Input-Output units: The OS allows the computer to manage its own resources such as memory, monitor, keyboard, printer, etc. Management of these resources is required for effective and fair utilization.

2. Multitasking: It manages memory and allows multiple programs to run in their own space and even communicate with each other through shared memory.

3. Manages Memory and Files: It manages the computer's main memory and secondary storage. Additionally, it allocates and deallocates memory for all tasks and applications.

4. Provides Security: It helps keep the system and applications safe through the authorization process. Thus, the OS provides security to the system.

5. Process Management: The operating system is responsible for starting, stopping, and managing processes and programs. It also controls the scheduling of processes and allocates resources to them.

6. Device Management: Device administration within an operating system controls every piece of hardware and virtual device on a computer. Input/output devices are assigned to processes by the device management system based on their importance. Depending on the situation, these devices may also be temporarily or permanently reallocated.

7. Networking: Networking within an operating system (OS) involves the software and protocols that enable a computer to connect to and communicate with other devices on a network. This includes managing network traffic, sharing resources, and ensuring security within the network.

8. Error Handling: Error handling in operating systems (OS) is a critical process that ensures system stability and reliability by detecting, managing, and responding to failures. These failures can arise from various sources, including hardware malfunctions, software bugs, or invalid user input.

9. File System Management: File system management within an operating system (OS) is the process of organizing and managing data on storage devices like hard drives or SSDs. It provides a structured way for the OS to interact with physical storage, allowing users and applications to create, access, and manipulate files efficiently.

Diagram:- (the figure shows the layered structure of a computer system: hardware at the centre, then the kernel, then system software, with the user layer outermost)

Hardware:-

In an operating system (OS), hardware refers to the physical components of the computer that the OS manages and interacts with. These include devices like the CPU, RAM, hard drives, and input/output devices.

Kernel:-

In an operating system, a kernel is the core program that manages all other
software and hardware components. It serves as the intermediary between
applications and the underlying hardware, handling low-level tasks like
memory management, process management, and device management.

System Software:-
In an operating system (OS), system software refers to the programs that
manage and control the hardware and software resources of a computer
system. It acts as an interface between the user, application software, and
the underlying hardware.

Outermost layer (User layer):-

In a layered operating system architecture, the outermost layer is typically the user interface (UI) layer. This layer interacts directly with the user, handling input and output to make the system user-friendly.

 The OS diagram illustrates how an operating system manages and controls hardware and software resources to enable users to interact with a computer effectively. It shows how the OS acts as an intermediary between hardware and applications, managing memory, processes, files, and devices. The diagram helps in understanding the various components and their roles in ensuring efficient and reliable computer operation.

Conclusion:-

An operating system (OS) is a crucial piece of software that manages hardware and software resources, providing a user-friendly interface for interacting with a computer. It acts as an intermediary between the user and the computer's hardware, enabling smooth operation and efficient resource allocation. The OS also plays a vital role in ensuring the security and stability of the computer system by managing user access and protecting against errors and vulnerabilities.

Practical:- 2

 PCB in Detail with Diagram.

:- A Process Control Block (PCB) is a data structure that contains information related to a process. The process control block is also known as a task control block, or an entry of the process table. It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also reflects the current state of the operating system.
The PCB is used to track the process's execution status. Each block of memory contains information about the process state, program counter, stack pointer, status of opened files, scheduling information, etc. When the process makes a transition from one state to another, the operating system must update the information in the process's PCB. A PCB contains information about the process, i.e., registers, quantum, priority, etc. The Process Table is an array of PCBs, which logically contains a PCB for all of the current processes in the system.

When a process is created (initialized or installed), the operating system creates a corresponding process control block, which specifies and tracks the process state (i.e., new, ready, running, waiting, or terminated). Since it is used to track process information, the PCB plays a key role in context switching.

PCBs are very important for process management and for almost all process-related activities. They are accessed and/or updated by many utility programs, such as schedulers and resource managers. A PCB acts as an "ID card" for a process, containing the details necessary for the OS to manage its execution, including process state, memory usage, and resource allocation.

A Process Control Block (PCB) is a data structure used by the operating system to manage information about a process. The process control block keeps track of many important pieces of information needed to manage processes efficiently. The diagram helps explain some of these key data items.

Pointer: The stack pointer must be saved when the process is switched from one state to another, in order to retain the current position of the process. A pointer within a PCB can also refer to a memory address that points to another PCB: a field holding the address of the next PCB in a linked list, which helps the OS maintain a queue of ready processes.

Process state: It stores the current state of the process, indicating what the process is doing (new, ready, running, waiting, or terminated). This field is critical for managing processes and scheduling them efficiently, since the current activity of a given process determines its state.

Process number: Every process is assigned a unique ID, known as the process ID or PID, which identifies the process.

Program counter: The program counter stores the address of the next instruction that is to be executed for the process.

Registers: When a process is running and its time slice expires, the current values of the process-specific registers are stored in the PCB and the process is swapped out. When the process is scheduled to run again, the register values are read from the PCB and written back to the CPU registers. This is the main purpose of the registers field in the PCB: it holds a snapshot of the CPU's registers taken when the process is not actively executing, which is essential for restoring the process's state when it is scheduled to run again.

Memory limits: This field contains information about the memory management system used by the operating system. Also known as memory management information, it stores details about how much memory a process is allowed to use and how it is allocated. This may include base and limit registers, page tables, or segment tables, depending on the memory management scheme in use.

List of Open files: This information includes the list of files opened for a
process. The list of open files within a Process Control Block (PCB) in an
operating system is a crucial component for managing a process's
interaction with the file system. This list maintains information about all
files currently accessed by that specific process.
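
To make these fields concrete, below is a minimal sketch of a PCB as a data structure. Python is used purely for illustration; the field names follow the descriptions above, next_pcb models the ready-queue pointer, and a real kernel would implement this in C with many more fields.

    from dataclasses import dataclass, field

    @dataclass
    class PCB:
        pid: int                     # process number (unique process ID)
        state: str = "new"           # new / ready / running / waiting / terminated
        program_counter: int = 0     # address of the next instruction to execute
        registers: dict = field(default_factory=dict)    # snapshot of CPU registers
        memory_limits: tuple = (0, 0)                    # e.g. (base, limit)
        open_files: list = field(default_factory=list)   # list of open files
        next_pcb: "PCB | None" = None                    # pointer to the next PCB in a queue

    # During a context switch the OS saves the CPU state into the outgoing
    # process's PCB and restores the state stored in the incoming one:
    def context_switch(old: PCB, new: PCB, cpu_registers: dict) -> dict:
        old.registers = dict(cpu_registers)   # save the outgoing snapshot
        old.state = "ready"
        new.state = "running"
        return dict(new.registers)            # values to load back onto the CPU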

 Advantages of PCB (Process Control Block)

Process Management:
PCBs store vital information about each process, such as its state, memory usage,
and resource allocations, allowing the OS to track and manage them efficiently.

Context Switching:
PCBs facilitate quick switching between processes by saving and loading the
current state of a running process, enabling seamless multitasking.

Scheduling:
PCBs store information like process priorities and execution history, which is
crucial for the OS to make informed scheduling decisions.

Resource Management:
PCBs keep track of the resources a process holds or needs, enabling efficient
allocation and deallocation of system resources.

Inter-Process Communication:
PCBs can include fields for managing communication between processes, such as
message queues and semaphores.

 Disadvantages of PCB (Process Control Block)

Memory Overhead:
Each PCB consumes memory, and with numerous processes, the cumulative
memory usage can be significant, potentially affecting system performance.

Complexity:
Using PCBs introduces complexity to the OS, making it more challenging to
develop and maintain.

Reduced Scalability:
The complexity of managing processes with PCBs can limit the scalability of the
OS, especially when dealing with a large number of processes.

Performance Overhead:
The OS needs to maintain and manage the PCB for each process, which can
introduce overhead and potentially reduce overall system performance.

Practical:- 3

 CPU scheduling with all algorithms.


:- CPU scheduling is the process by which the operating system decides which of the ready, in-memory processes is to be executed next by the CPU. In a multiprogramming environment, multiple processes reside in the ready queue waiting to execute. Because only one process can run at a time on a CPU, the CPU scheduler is responsible for managing this execution queue. Scheduling matters because the CPU can handle only one task at a time, while there are usually many tasks waiting to be processed. The main objective is to maximize CPU utilization and system throughput while minimizing waiting time, response time, and turnaround time.

 Process States:-
Understanding CPU scheduling begins with understanding process states:
1. New – Process is being created.
2. Ready – Process is ready to run and waiting for CPU.
3. Running – CPU is executing the process instructions.
4. Waiting/Blocked – Process is waiting for some I/O operation.
5. Terminated – Process has finished execution.

Terminologies Used in CPU Scheduling:-
 Arrival Time: The time at which the process arrives in the ready queue and becomes eligible for execution. It is a key factor in CPU scheduling algorithms, influencing how processes are prioritized and selected for execution.

 Completion Time: The time at which the process completes its execution. Completion time (CT) refers to the moment a process finishes executing its instructions and finally exits the system.

 Burst Time: The amount of time a process requires to execute on the CPU. It represents the CPU time needed for a process to complete its execution, excluding any I/O time.

 Turn Around Time: The difference between completion time and arrival time. Turnaround time (TAT) is the total time a process spends from its submission to the system until its completion. It is a key metric for evaluating the efficiency of CPU scheduling algorithms.
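
As a worked example (the numbers are made up): if a process arrives in the ready queue at time 2, needs a CPU burst of 4 units, and completes at time 9, then Turnaround Time = 9 - 2 = 7, and its Waiting Time = Turnaround Time - Burst Time = 7 - 4 = 3.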

Types of CPU Scheduling Algorithms:-
There are mainly two types of scheduling methods:

 Preemptive Scheduling: Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.

 Non-Preemptive Scheduling: Non-preemptive scheduling is used when a process terminates, or when a process switches from the running state to the waiting state.

 CPU Scheduling Algorithms


1. First Come, First Serve (FCFS)

:- FCFS Scheduling is a non-preemptive algorithm, meaning once a process starts running, it cannot be stopped until it voluntarily relinquishes the CPU, typically when it terminates or performs I/O. This method schedules processes in the order they arrive, without considering priority or other factors.
The mechanics of FCFS are straightforward:
1. Arrival: Processes enter the system and are placed in a queue in the
order they arrive.
2. Execution: The CPU takes the first process from the front of the
queue, executes it until it is complete, and then removes it from the
queue.
3. Repeat: The CPU takes the next process in the queue and repeats the
execution process.

 Advantages of FCFS
 It is the simplest and most basic CPU scheduling algorithm.
 Every process gets a chance to execute in the order of its arrival. This ensures that no process is arbitrarily prioritized over another.
 It is easy to implement and doesn't require complex data structures.
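
The mechanics above can be turned into a short simulation. This is a minimal sketch in Python; the process names and their arrival/burst values are hypothetical.

    def fcfs(processes):
        """processes: list of (name, arrival_time, burst_time) tuples."""
        time = 0
        for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
            time = max(time, arrival)        # CPU may sit idle until the process arrives
            completion = time + burst        # non-preemptive: runs to completion
            turnaround = completion - arrival
            waiting = turnaround - burst
            print(f"{name}: completion={completion} turnaround={turnaround} waiting={waiting}")
            time = completion

    fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)])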

2. Shortest Job First (SJF) or Shortest Job Next (SJN)


:- Shortest Job First (SJF) or Shortest Job Next (SJN) is a scheduling policy that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for the other processes waiting to be executed.

Implementation of SJF Scheduling


 Sort all the processes according to their arrival time.
 Then select the process that has the minimum arrival time and the minimum burst time.
 After that process completes, build a pool of processes (a ready queue) from those that arrived during its execution, and select from that queue the process with the minimum burst time.

 Advantages of SJF Scheduling
 SJF is better than the First Come First Serve (FCFS) algorithm as it reduces the average waiting time.
 SJF is generally used for long-term scheduling.
 It is suitable for jobs that run in batches, where run times are already known.
 SJF is provably optimal in terms of average turnaround time (TAT).
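
The steps above can be sketched as a small non-preemptive SJF simulation (Python; the example processes are invented for illustration):

    def sjf(processes):
        """Non-preemptive SJF; processes: list of (name, arrival, burst)."""
        pending = sorted(processes, key=lambda p: (p[1], p[2]))   # by arrival, then burst
        time, turnarounds = 0, []
        while pending:
            ready = [p for p in pending if p[1] <= time]
            if not ready:                     # nothing has arrived yet: jump ahead
                time = pending[0][1]
                continue
            name, arrival, burst = min(ready, key=lambda p: p[2])   # shortest burst wins
            pending.remove((name, arrival, burst))
            time += burst
            turnarounds.append((name, time - arrival))
        return turnarounds

    print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))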

3. Shortest Remaining Time First (SRTF).


:- In SRTF, the process with the least time left to finish is selected to run.
The running process will continue until it finishes or a new process with a
shorter remaining time arrives. This way, the process that can finish the
fastest is always given priority.

Advantages of SRTF Scheduling:-

1. Minimizes Average Waiting Time: SRTF reduces the average waiting time by prioritizing processes with the shortest remaining execution time.
2. Efficient for Short Processes: Shorter processes get completed faster, improving overall system responsiveness.
3. Ideal for Time-Critical Systems: It ensures that time-sensitive processes are executed quickly.
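
The same idea made preemptive can be simulated one time unit at a time, re-checking the remaining times after each tick. A small illustrative Python sketch (the process values are made up):

    def srtf(processes):
        """SRTF; processes: list of (name, arrival, burst)."""
        remaining = {name: burst for name, _, burst in processes}
        arrival = {name: arr for name, arr, _ in processes}
        time, finished = 0, {}
        while remaining:
            ready = [n for n in remaining if arrival[n] <= time]
            if not ready:
                time += 1
                continue
            current = min(ready, key=lambda n: remaining[n])   # least time left wins
            remaining[current] -= 1
            time += 1
            if remaining[current] == 0:
                del remaining[current]
                finished[current] = time - arrival[current]    # turnaround time
        return finished

    print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))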

4. Round Robin Scheduling
:-Round Robin Scheduling is a method used by operating systems to
manage the execution time of multiple processes that are competing for
CPU attention. It is called "round robin" because the system rotates
through all the processes, allocating each of them a fixed time slice or
"quantum", regardless of their priority.
The primary goal of this scheduling method is to ensure that all processes are
given an equal opportunity to execute, promoting fairness among tasks.

Advantages of Round Robin Scheduling
 Fairness: Each process gets an equal share of the CPU.
 Simplicity: The algorithm is straightforward and easy to implement.
 Responsiveness: Round Robin can handle multiple processes without significant delays, making it ideal for time-sharing systems.

Disadvantages of Round Robin Scheduling:
 Overhead: Switching between processes can lead to high overhead, especially if the quantum is too small.
 Underutilization: If the quantum is too large, the CPU can feel unresponsive while it waits for a process to finish its time slice.
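
The rotation through a fixed quantum can be sketched with a simple queue (Python; the burst values and the quantum of 2 are arbitrary choices for illustration):

    from collections import deque

    def round_robin(processes, quantum=2):
        """processes: list of (name, burst); all are assumed to arrive at time 0."""
        queue = deque(processes)
        time = 0
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)      # run for at most one quantum
            time += run
            if remaining - run > 0:
                queue.append((name, remaining - run))   # back of the queue
            else:
                print(f"{name} finished at time {time}")

    round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2)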

5. Priority scheduling
:- Priority scheduling is one of the most common scheduling algorithms used by the operating system to schedule processes based on their priority. Each process is assigned a priority value based on criteria such as memory requirements, time requirements, other resource needs, or the ratio of average I/O to average CPU burst time. The process with the highest priority is selected for execution first. If there are multiple processes sharing the same priority, they are scheduled in the order they arrived, following a First-Come, First-Served approach.

Non-Preemptive Priority Scheduling
In non-preemptive priority scheduling, the CPU is not taken away from the running process. Even if a higher-priority process arrives, the currently running process will complete first.

Preemptive Priority Scheduling
In preemptive priority scheduling, the CPU can be taken away from the currently running process if a new process with a higher priority arrives.
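
A minimal non-preemptive priority scheduler, assuming the common convention that a lower number means a higher priority (the processes and their priorities are made up for illustration):

    def priority_scheduling(processes):
        """processes: list of (name, arrival, burst, priority); ties broken FCFS."""
        pending = list(processes)
        time = 0
        while pending:
            ready = [p for p in pending if p[1] <= time]
            if not ready:                     # CPU idle until the next arrival
                time = min(p[1] for p in pending)
                continue
            # highest priority first; among equals, earliest arrival (FCFS)
            job = min(ready, key=lambda p: (p[3], p[1]))
            pending.remove(job)
            time += job[2]
            print(f"{job[0]} finishes at time {time}")

    priority_scheduling([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 2, 1)])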

6. Highest Response Ratio Next (HRRN) CPU Scheduling

:- One of the most optimal scheduling algorithms is Highest Response Ratio Next (HRRN). It is a non-preemptive algorithm in which scheduling is done based on an extra parameter called the Response Ratio. Given N processes with their arrival times and burst times, the task is to find the average waiting time and the average turnaround time using the HRRN scheduling algorithm. It is designed to improve upon simpler algorithms like First-Come-First-Serve (FCFS) and Shortest Job Next (SJN) by balancing both the waiting time and the burst time of processes. A process, once selected, will run till completion. The Response Ratio is calculated as:

Response Ratio = (Waiting Time + Burst Time) / Burst Time

Characteristics of HRRN Scheduling
 Highest Response Ratio Next is a non-preemptive CPU scheduling algorithm and is considered one of the most optimal scheduling algorithms.
 The criterion for HRRN is the Response Ratio, and the mode is non-preemptive.
 HRRN is considered a modification of Shortest Job First designed to reduce the problem of starvation.
 In comparison with SJF, under HRRN the CPU is allotted to the next process with the highest response ratio, not simply to the process having the least burst time.
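
A small sketch of HRRN using the formula above (Python; the process values are illustrative):

    def hrrn(processes):
        """HRRN; processes: list of (name, arrival, burst)."""
        pending = list(processes)
        time = 0
        while pending:
            ready = ([p for p in pending if p[1] <= time]
                     or [min(pending, key=lambda p: p[1])])   # idle: wait for next arrival
            time = max(time, min(p[1] for p in ready))
            # response ratio = (waiting + burst) / burst, where waiting = time - arrival
            job = max(ready, key=lambda p: (time - p[1] + p[2]) / p[2])
            pending.remove(job)
            time += job[2]                    # non-preemptive: runs to completion
            print(f"{job[0]}: turnaround = {time - job[1]}")

    hrrn([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4)])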

7. Multilevel Feedback Queue Scheduling (MLFQ)
:- Multilevel Feedback Queue (MLFQ) Scheduling is like Multilevel Queue (MLQ) Scheduling, but here processes can move between the queues. This makes it much more efficient than multilevel queue scheduling.

Characteristics of Multilevel Feedback Queue Scheduling:
 In a plain multilevel queue scheduling algorithm, processes are permanently assigned to a queue on entry to the system; in the feedback variant, processes are allowed to move between queues.
 Permanent assignment has the advantage of low scheduling overhead, but it is inflexible; MLFQ trades a little overhead for that flexibility.

Advantages of Multilevel Feedback Queue Scheduling:

 It is more flexible.

 It allows different processes to move between different queues.

 It prevents starvation by moving a process that waits too long in a lower-priority queue up to a higher-priority queue.
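
A tiny MLFQ sketch in Python. Here three queues with quanta of 2, 4, and 8 are assumed, every process arrives at time 0, and the anti-starvation promotion step is omitted for brevity; a process that uses up its whole quantum is demoted one level.

    from collections import deque

    def mlfq(processes, quanta=(2, 4, 8)):
        """processes: list of (name, burst); all start in the top queue."""
        queues = [deque(), deque(), deque()]
        for p in processes:
            queues[0].append(p)
        time = 0
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            name, remaining = queues[level].popleft()
            run = min(quanta[level], remaining)
            time += run
            if remaining - run > 0:
                demoted = min(level + 1, len(queues) - 1)        # drop to a lower queue
                queues[demoted].append((name, remaining - run))
            else:
                print(f"{name} finished at time {time} (queue {level})")

    mlfq([("P1", 10), ("P2", 3), ("P3", 1)])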

Practical:- 4

 Paging in detail
:- Paging is the process of moving parts of a program, called pages, from
secondary storage (like a hard drive) into the main memory (RAM). The
main idea behind paging is to break a program into smaller fixed-size
blocks called pages.
Paging is a memory management technique that addresses common
challenges in allocating and managing memory efficiently. Here we can
understand why paging is needed as a Memory Management technique:

 Memory isn't always available in a single block: Programs often need more memory than what is available in a single continuous block. Paging breaks memory into smaller, fixed-size pieces, making it easier to allocate scattered free spaces.

 Process sizes can increase or decrease: Programs don't need to occupy contiguous memory, so they can grow dynamically without the need to be moved.

Important Features of Paging

 Logical to physical address mapping: Paging divides a process's logical address space into fixed-size pages. Each page maps to a frame in physical memory, enabling flexible memory management.

 Fixed page and frame size: Pages and frames have the same fixed
size. This simplifies memory management and improves system
performance.

 Page table entries: Each logical page is represented by a page table entry (PTE). A PTE stores the corresponding frame number and control bits.

 Number of page table entries: The page table has one entry per
logical page. Thus, its size equals the number of pages in the process's
address space.

 Page table stored in main memory: The page table is kept in main memory. This can add overhead when processes are swapped in or out.
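
The logical-to-physical mapping described above boils down to splitting an address into a page number and an offset, then substituting the frame number found in the page table. A minimal sketch (the 4 KB page size and the page-table contents are assumed values):

    PAGE_SIZE = 4096                      # fixed page/frame size (assumed 4 KB)
    page_table = {0: 5, 1: 9, 2: 3}       # page number -> frame number (illustrative)

    def translate(virtual_address):
        page = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        if page not in page_table:
            raise RuntimeError("page fault: page %d is not in memory" % page)
        return page_table[page] * PAGE_SIZE + offset

    # Virtual address 4100 lies in page 1 at offset 4 -> frame 9 -> physical 36868.
    print(translate(4100))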

Advantages of Paging

 Eliminates External Fragmentation: Paging divides memory into fixed-size blocks (pages and frames), so processes can be loaded wherever there is free space in memory. This prevents wasted space due to fragmentation.

 Efficient Memory Utilization: Since pages can be placed in non-contiguous memory locations, even small free spaces can be utilized, leading to better memory allocation.

 Supports Virtual Memory: Paging enables the implementation of
virtual memory, allowing processes to use more memory than
physically available by swapping pages between RAM and secondary
storage.

 Ease of Swapping: Individual pages can be moved between physical memory and disk (swap space) without affecting the entire process, making swapping faster and more efficient.

 Improved Security and Isolation: Each process works within its own set of pages, preventing one process from accessing another's memory space.

Disadvantages of Paging

 Internal Fragmentation: If the size of a process is not a perfect multiple of the page size, the unused space in the last page results in internal fragmentation.

 Increased Overhead: Maintaining the page table requires additional memory and processing. For large processes, the page table can grow significantly, consuming valuable memory resources.

 Page Table Lookup Time: Accessing memory requires translating logical addresses to physical addresses using the page table. This additional step increases memory access time, although Translation Lookaside Buffers (TLBs) can help reduce the impact.

 I/O Overhead During Page Faults: When a required page is not in physical memory (a page fault), it needs to be fetched from secondary storage, causing delays and increased I/O operations.
Practical:- 5

 Virtual memory in detail.


:- Virtual memory is a memory management technique used by operating
systems to give the appearance of a large, continuous block of memory to
applications, even if the physical memory (RAM) is limited. It allows
larger applications to run on systems with less RAM.
 The main objective of virtual memory is to support multiprogramming. The main advantage it provides is that a running process does not need to be entirely in memory.
 Programs can be larger than the available physical memory. Virtual memory provides an abstraction of main memory, eliminating concerns about storage limitations.
 A memory hierarchy, consisting of a computer system's memory and a disk, enables a process to operate with only some portions of its address space in RAM, allowing more processes to be in memory.

Working

Virtual memory is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.

 All memory references within a process are logical addresses that are
dynamically translated into physical addresses at run time. This means
that a process can be swapped in and out of the main memory such that
it occupies different places in the main memory at different times
during the course of execution.

 A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
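
A hedged sketch of how demand paging brings a missing piece into memory. The page-table layout and frame numbers here are invented for illustration; a real OS works with hardware-defined page-table entries and may first have to evict a victim page.

    # Each entry holds a frame number and a "valid" bit; touching an invalid
    # page raises a page fault and the OS loads the page from swap.
    page_table = {0: {"frame": 2, "valid": True},
                  1: {"frame": None, "valid": False}}   # page 1 is out on disk
    free_frames = [7]

    def access(page):
        entry = page_table[page]
        if not entry["valid"]:                # page fault
            frame = free_frames.pop()         # (a real OS may evict a victim here)
            print(f"page fault: loading page {page} from swap into frame {frame}")
            entry["frame"], entry["valid"] = frame, True
        return entry["frame"]

    access(0)   # hit: already in memory
    access(1)   # miss: swapped in on demand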

Types of Virtual Memory

In a computer, virtual memory is managed by the Memory Management Unit (MMU), which is often built into the CPU. The CPU generates virtual addresses that the MMU translates into physical addresses.
There are two main types of virtual memory:

 Paging
 Segmentation

Paging:-

Paging divides memory into small fixed-size blocks called pages. When the
computer runs out of RAM, pages that aren't currently in use are moved to
the hard drive, into an area called a swap file. The swap file acts as an
extension of RAM. When a page is needed again, it is swapped back into
RAM, a process known as page swapping.

Segmentation:-

Segmentation divides virtual memory into segments of different sizes. Segments that aren't currently needed can be moved to the hard drive. The system uses a segment table to keep track of each segment's status, including whether it's in memory, whether it's been modified, and its physical address. Segments are mapped into a process's address space only when needed.
Advantages of Virtual Memory

 Many processes can be maintained in main memory.
 A process larger than main memory can be executed because of demand paging. The OS itself loads pages of a process into main memory as required.
 It allows greater multiprogramming levels by using less of the available (primary) memory for each process.
 The virtual address space can be much larger than the available main memory.
 It makes it possible to run more applications at once.
 Users are spared from having to add memory modules when RAM space runs out, and applications are freed from managing shared memory.
 Since only a portion of a program needs to be in memory for execution, programs can start and run faster.
 Memory isolation increases security.

Conclusion

In conclusion, virtual memory is a crucial feature in operating systems that allows computers to run larger applications and handle more processes than the physical RAM alone can support. By using techniques like paging and segmentation, the system extends the available memory onto the hard drive, ensuring that the operating system and applications can operate smoothly. Although virtual memory can introduce some performance overhead due to the slower speed of hard drives compared to RAM, it provides significant benefits in terms of memory management, efficiency, and multitasking capabilities.

Practical:- 6

 File management

:- File systems are a crucial part of any operating system, providing a structured way to store, organize, and manage data on storage devices such as hard drives, SSDs, and USB drives. Essentially, a file system acts as a bridge between the operating system and the physical storage hardware, allowing users and applications to create, read, update, and delete files in an organized and efficient manner.
File management within an operating system (OS) involves the processes
and tools used to handle data storage, retrieval, and organization. It
encompasses creating, storing, organizing, and manipulating files,
ensuring they are easily accessible and secure. This includes managing
file hierarchies, permissions, and efficient data access.

A file is a collection of co-related information that is recorded in some format (such as text, PDF, DOCX, etc.) and is stored on various storage media such as flash drives, hard disk drives (HDD), magnetic tapes, and optical disks. Files can be read-only or read-write, and they serve as a medium for providing input(s) and getting output(s). An operating system is a software program that acts as an interface between the hardware, the application software, and the users. The main aim of an operating system is to manage all the computer's resources; it gives a platform on which application software and other system software can perform their tasks.

A file system is the method an operating system uses to store, organize, and manage files and directories on a storage device. Some common types of file systems include:

 FAT (File Allocation Table): An older file system used by older versions of Windows and other operating systems.
 NTFS (New Technology File System): A modern file system used by Windows. It supports features such as file and folder permissions, compression, and encryption.
 ext (Extended File System): A file system commonly used on Linux and Unix-based operating systems.
 HFS (Hierarchical File System): A file system used by macOS.
 APFS (Apple File System): A newer file system introduced by Apple for its Macs and iOS devices.
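
From a program's point of view, the file system is reached through system calls for creating, reading, inspecting, and deleting files. A small illustration using Python's standard library (the file name is hypothetical):

    import os

    with open("notes.txt", "w") as f:      # create the file and open it for writing
        f.write("hello, file system\n")

    with open("notes.txt", "r") as f:      # open it again for reading
        print(f.read(), end="")

    print(os.stat("notes.txt").st_size, "bytes")   # metadata kept by the file system
    os.remove("notes.txt")                          # delete the directory entry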

Advantages of File System

 Organization: A file system allows files to be organized into directories and subdirectories, making it easier to manage and locate files.

 Data Protection: File systems often include features such as file and
folder permissions, backup and restore, and error detection and
correction, to protect data from loss or corruption.

 Improved Performance: A well-designed file system can improve
the performance of reading and writing data by organizing it efficiently
on disk.

Disadvantages of File System

 Compatibility Issues: Different file systems may not be compatible with each other, making it difficult to transfer data between different operating systems.

 Disk Space Overhead: File systems may use some disk space to store
metadata and other overhead information, reducing the amount of space
available for user data.

 Vulnerability: File systems can be vulnerable to data corruption, malware, and other security threats, which can compromise the stability and security of the system.

Conclusion

In conclusion, file systems are essential components of operating systems that manage how data is stored, organized, and accessed on storage devices. They provide the structure and rules necessary for creating, managing, and protecting files and directories. By ensuring efficient storage management, easy file navigation, and robust security measures, file systems contribute to the overall functionality, performance, and reliability of computer systems.

Practical:- 7

 Disk structure in detail.

:- Disk management is one of the critical operations carried out by the operating system. It deals with organizing the data stored on secondary storage devices, which include hard disk drives and solid-state drives. It also optimizes that data and keeps it safe by implementing various disk management techniques. We will learn more about disk management and its related techniques found in operating systems.

The range of services and add-ons provided by modern operating systems is constantly expanding, but four basic management functions are implemented by all operating systems. These management functions, each of which is dealt with in more detail elsewhere, are:

 Process Management
 Memory Management
 File and Disk Management
 I/O System Management

There is no guarantee that files will be stored in contiguous locations on physical disk drives, especially large files; it depends greatly on the amount of space available. When the disk is full, new files are more likely to be recorded in multiple locations. However, as far as the user is concerned, the view of the file provided by the operating system hides the fact that the file is fragmented into multiple parts.

Advantages of disk management include:

1. Improved organization and management of data.
2. Efficient use of available storage space.
3. Improved data integrity and security.
4. Improved performance through techniques such as defragmentation.

Disadvantages of disk management include:
1. Increased system overhead due to disk management tasks.
2. Increased complexity in managing multiple partitions and file systems.
3. Increased risk of data loss due to errors during disk management tasks.
Overall, disk management is an essential aspect of operating system management and can greatly improve system performance and data integrity when implemented properly.

Conclusion

Disk management is a pivotal part of the operating system which includes managing and optimizing the storage resources. It paves the way for us to organize how our data is stored on the system. Disk management also facilitates data integrity, which is very important in today's time. All the techniques supported by disk management help us in enhancing the overall performance of the operating system.

Practical:- 8

 Define RAID in OS.

:- RAID (Redundant Array of Independent Disks) is a technique that makes use of a combination of multiple disks for storing data, instead of a single disk, for increased performance, data redundancy, or to protect data in the case of a drive failure.

RAID is like having backup copies of your important files stored in different places on several hard drives or solid-state drives (SSDs). If one drive stops working, your data is still safe because you have other copies stored on the other drives. It's like having a safety net to protect your files from being lost if one of your drives breaks down.

RAID in a Database Management System (DBMS) is a technology that combines multiple physical disk drives into a single logical unit for data storage. The main purpose of RAID is to improve data reliability, availability, and performance. There are different levels of RAID, each offering a different balance of these benefits.

RAID Controller

A RAID controller is like a boss for your hard drives in a big storage
system. It works between your computer's operating system and the
actual hard drives, organizing them into groups to make them easier to
manage. This helps speed up how fast your computer can read and write
data, and it also adds a layer of protection in case one of your hard drives
breaks down. So, it's like having a smart helper that makes your hard
drives work better and keeps your important data safer.

Types of RAID Controller


There are three types of RAID controller:

Hardware Based: In hardware-based RAID, a physical controller manages the whole array. This controller can handle the whole group of hard drives together. It's designed to work with different types of hard drives, like SATA (Serial Advanced Technology Attachment) or SCSI (Small Computer System Interface).

Software Based: In software-based RAID, the controller doesn't have its own dedicated hardware, so it uses the computer's main processor and memory to do its job. It performs the same functions as a hardware-based RAID controller, like managing the hard drives and keeping your data safe.

Firmware Based: Firmware-based RAID controllers are like helpers built into the computer's main board. They work with the main processor, just like software-based RAID, but they run only when the computer starts up. Once the operating system is running, a special driver takes over the RAID job.

RAID is transparent to the underlying system: to the host, the array appears as a single big disk presenting itself as a linear array of blocks. This allows older technologies to be replaced by RAID without making too many changes to the existing code.

Different RAID Levels

 RAID-0 (Striping)
 RAID-1 (Mirroring)
 RAID-2 (Bit-Level Striping with Dedicated Parity)
 RAID-3 (Byte-Level Striping with Dedicated Parity)
 RAID-4 (Block-Level Striping with Dedicated Parity)
 RAID-5 (Block-Level Striping with Distributed Parity)
 RAID-6 (Block-Level Striping with Two Parity Blocks)

RAID-0 (Striping)
 RAID-0 improves system performance by splitting data into smaller "blocks" and spreading them across multiple disks. This process is called "striping." It enhances data access speed by enabling parallel read/write operations but provides no redundancy or fault tolerance.

RAID-1 (Mirroring)
 RAID-1 enhances reliability by creating an identical copy (mirror) of each data block on separate disks. This ensures that even if one disk fails, the data remains accessible from its duplicate. While this configuration is highly reliable, it requires significant storage overhead.

RAID-2 (Bit-Level Striping with Dedicated Parity)
 RAID-2 is a specialized RAID level that uses bit-level striping combined with error correction using Hamming code. In this configuration, data is distributed at the bit level across multiple drives, and a dedicated parity drive is used for error detection and correction. While it offers strong fault tolerance, its complexity and cost make it rarely used in practice.

RAID-3 (Byte-Level Striping with Dedicated Parity)
 RAID-3 enhances fault tolerance by employing byte-level striping across multiple drives and storing parity information on a dedicated parity drive. The dedicated parity drive allows for the reconstruction of lost data if a single drive fails. This configuration is suitable for workloads requiring high throughput for sequential data but is less efficient for random I/O operations.

RAID-4 (Block-Level Striping with Dedicated Parity)
 RAID-4 introduces block-level striping across multiple disks, combined with a dedicated parity disk to provide fault tolerance. Data is written in blocks, and a separate disk stores parity information calculated using the XOR function. This setup allows for data recovery in case of a single disk failure, making RAID-4 more reliable than RAID-0 but less efficient in write-intensive scenarios due to reliance on a dedicated parity disk.
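
The XOR parity mentioned above can be demonstrated directly: XOR-ing all data blocks produces the parity block, and XOR-ing the surviving blocks with the parity reconstructs a lost one (a small Python sketch; the byte values are made up):

    d1 = bytes([10, 20, 30])
    d2 = bytes([7, 7, 7])
    d3 = bytes([1, 2, 3])

    # Parity block: byte-wise XOR of all data blocks.
    parity = bytes(a ^ b ^ c for a, b, c in zip(d1, d2, d3))

    # Suppose the disk holding d2 fails: XOR the survivors with the parity.
    rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d1, d3, parity))
    assert rebuilt == d2
    print(list(rebuilt))    # [7, 7, 7]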

RAID-5 (Block-Level Striping with Distributed Parity)
 RAID-5 builds on RAID-4 by distributing parity information across all disks instead of storing it on a dedicated parity drive. This distributed parity significantly improves write performance, especially for random write operations, while maintaining fault tolerance for single disk failures. RAID-5 is one of the most commonly used RAID configurations due to its balance between reliability, performance, and storage efficiency.

RAID-6 (Block-Level Striping with Two Parity Blocks)
 RAID-6 is an advanced version of RAID-5 that provides enhanced fault tolerance by introducing double distributed parity. This allows RAID-6 to recover from the failure of up to two disks simultaneously, making it more reliable for critical systems with larger arrays. However, the added parity calculations can impact write performance.

Conclusion

In conclusion, RAID technology in database management systems distributes and replicates data across several drives to improve data performance and reliability. It is a useful tool in contemporary database setups, since it is essential to preserving system availability and protecting sensitive data.

