Operating System
Practical:- 1
3. Manages Memory and Files: It manages the computer's main memory and
secondary storage. Additionally, it allocates and deallocates memory to all tasks and
applications.
4. Provides Security: It helps keep the system and applications
safe through the authorization process. Thus, the OS provides security to
the system.
Diagram:-
Hardware:-
Kernel:-
In an operating system, a kernel is the core program that manages all other
software and hardware components. It serves as the intermediary between
applications and the underlying hardware, handling low-level tasks like
memory management, process management, and device management.
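To make the kernel's role concrete, here is a minimal sketch (assuming a POSIX system such as Linux): the application never touches the display hardware itself; it only asks the kernel to perform the output through the write() system call.

/* Minimal sketch: an application requests I/O through the kernel.
   Assumes a POSIX system (Linux/UNIX); write() is a system call. */
#include <unistd.h>

int main(void) {
    const char msg[] = "Hello from user space\n";
    /* File descriptor 1 is standard output; the kernel's device and
       file-system layers perform the actual hardware I/O. */
    write(1, msg, sizeof(msg) - 1);
    return 0;
}

Every file read, memory allocation, and device access ultimately passes through similar kernel entry points.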
System Software:-
In an operating system (OS), system software refers to the programs that
manage and control the hardware and software resources of a computer
system. It acts as an interface between the user, application software, and
the underlying hardware.
Outermost layer (User layer):-
Conclusion:-
Practical:-2
:- A Process Control Block (PCB) is a data structure that contains information related to a
process. The process control block is also known as a task control block, an entry
of the process table, etc. It is very important for process management, as the data
structuring for processes is done in terms of the PCB. It also reflects the current
state of the operating system.
The process control block (PCB) is used to track the process's execution status.
Each PCB contains information about the process state, program
counter, stack pointer, status of open files, scheduling information, etc.
When the process makes a transition from one state to another, the operating
system must update information in the process’s PCB. A Process Control Block
(PCB) contains information about the process, i.e. registers, quantum, priority, etc.
The Process Table is an array of PCBs, which logically contains a PCB for all of
the current processes in the system.
PCBs are very important for process management for almost all process related
activities. They are accessed and/or updated by many utility programs, like
schedulers and resource managers. As PCBs track process state information, they
play a vital role in context switching.
It acts as an "ID card" for a process, containing details necessary for the OS to
manage its execution, including process state, memory usage, and resource
allocation.
A Process Control Block (PCB) is a data structure used by the operating system to
manage information about a process. The process control block keeps track of many
important pieces of information needed to manage processes efficiently. The
diagram helps explain some of these key data items.
Process state: It stores the respective state of the process. The process
state in the Process Control Block (PCB) defines the current status of a
process, indicating what it is doing. The PCB, a data structure, stores all
the essential information about a process, including its state. This process
state is critical for managing processes and scheduling them efficiently.
When a process runs, it modifies the state of the system. The current
activity of a given process determines the state of the process in general.
Memory limits: This field contains information about the memory
management system used by the operating system. This may include page
tables, segment tables, etc. The "memory limits" field, also known as
memory management information, stores details about how much memory
a process is allowed to use and how it is allocated. This includes
information like base and limit registers, page tables, or segment tables,
depending on the memory management scheme used by the operating
system.
List of Open files: This information includes the list of files opened for a
process. The list of open files within a Process Control Block (PCB) in an
operating system is a crucial component for managing a process's
interaction with the file system. This list maintains information about all
files currently accessed by that specific process.
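As an illustration of how these fields fit together, the C sketch below groups them into one structure. The field names are hypothetical and chosen for this example only; real kernels use much larger structures (Linux, for instance, uses task_struct).

/* Illustrative sketch of a Process Control Block; field names are
   hypothetical, not taken from any real kernel. */
#include <stdio.h>

#define MAX_OPEN_FILES 16

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;                       /* process identifier         */
    enum proc_state state;                     /* current process state      */
    unsigned long   program_counter;           /* next instruction address   */
    unsigned long   registers[8];              /* saved CPU registers        */
    int             priority;                  /* scheduling priority        */
    unsigned long   mem_base, mem_limit;       /* memory limits (base/limit) */
    int             open_files[MAX_OPEN_FILES];/* descriptors of open files  */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = READY, .priority = 5,
                     .mem_base = 0x4000, .mem_limit = 0x8000 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}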
Process Management:
PCBs store vital information about each process, such as its state, memory usage,
and resource allocations, allowing the OS to track and manage them efficiently.
Context Switching:
PCBs facilitate quick switching between processes by saving and loading the
current state of a running process, enabling seamless multitasking.
Scheduling:
PCBs store information like process priorities and execution history, which is
crucial for the OS to make informed scheduling decisions.
Resource Management:
PCBs keep track of the resources a process holds or needs, enabling efficient
allocation and deallocation of system resources.
Inter-Process Communication:
PCBs can include fields for managing communication between processes, such as
message queues and semaphores.
Memory Overhead:
Each PCB consumes memory, and with numerous processes, the cumulative
memory usage can be significant, potentially affecting system performance.
Complexity:
Using PCBs introduces complexity to the OS, making it more challenging to
develop and maintain.
Reduced Scalability:
The complexity of managing processes with PCBs can limit the scalability of the
OS, especially when dealing with a large number of processes.
Performance Overhead:
The OS needs to maintain and manage the PCB for each process, which can
introduce overhead and potentially reduce overall system performance.
Practical:-3
Process States:-
Understanding CPU scheduling begins with understanding process states:
1. New – Process is being created.
2. Ready – Process is ready to run and waiting for CPU.
3. Running – CPU is executing the process instructions.
4. Waiting/Blocked – Process is waiting for some I/O operation.
5. Terminated – Process has finished execution.
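A small sketch of how these five states could be represented in code (the names are illustrative, not taken from any particular kernel); it prints a single ready-to-running transition, the dispatch step.

/* Sketch of the five process states as an enum; names are illustrative. */
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

static const char *state_name(enum proc_state s) {
    switch (s) {
    case NEW:        return "New";
    case READY:      return "Ready";
    case RUNNING:    return "Running";
    case WAITING:    return "Waiting/Blocked";
    case TERMINATED: return "Terminated";
    }
    return "Unknown";
}

int main(void) {
    enum proc_state s = READY;
    printf("before dispatch: %s\n", state_name(s));
    s = RUNNING;                      /* scheduler dispatches the process */
    printf("after dispatch:  %s\n", state_name(s));
    return 0;
}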
Terminologies Used in CPU Scheduling:-
Arrival Time: The time at which the process arrives in the ready
queue. Arrival time refers to the time when a process enters the
ready queue and becomes eligible for execution. It is a key factor in
CPU scheduling algorithms, influencing how processes are
prioritized and selected for execution.
Burst Time: Time required by a process for CPU execution. Burst time
refers to the amount of time a process requires to execute on the CPU. It
represents the CPU time needed for a process to complete its execution,
excluding any I/O time.
Non-Preemptive Scheduling: Non-preemptive scheduling is used
when a process terminates, or when a process switches from the running
state to the waiting state.
Advantages of FCFS
The simplest and most basic form of CPU scheduling algorithm.
Every process gets a chance to execute in the order of its arrival. This
ensures that no process is arbitrarily prioritized over another.
Easy to implement, it doesn't require complex data structures.
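As a rough illustration of FCFS timing, the sketch below uses three made-up processes (already ordered by arrival) and derives completion, turnaround, and waiting time for each.

/* FCFS timing sketch with made-up processes, already sorted by arrival. */
#include <stdio.h>

struct proc { int pid, arrival, burst; };

int main(void) {
    struct proc p[] = { {1, 0, 5}, {2, 1, 3}, {3, 2, 8} };  /* sample data */
    int n = 3, time = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)          /* CPU idles until the process arrives */
            time = p[i].arrival;
        time += p[i].burst;               /* completion time of this process */
        int turnaround = time - p[i].arrival;
        int waiting = turnaround - p[i].burst;
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               p[i].pid, time, turnaround, waiting);
    }
    return 0;
}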
In SJF (Shortest Job First) scheduling, the process with the minimum arrival time and minimum
burst time is selected next.
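A minimal sketch of that selection rule for non-preemptive SJF, using made-up processes; among the processes that have already arrived, the shortest burst is picked, with ties broken by arrival time.

/* Non-preemptive SJF sketch: among arrived processes, pick the shortest
   burst (ties broken by arrival time). Process data is made up. */
#include <stdio.h>

struct proc { int pid, arrival, burst, done; };

int main(void) {
    struct proc p[] = { {1, 0, 7, 0}, {2, 2, 4, 0}, {3, 4, 1, 0}, {4, 5, 4, 0} };
    int n = 4, time = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (p[i].done || p[i].arrival > time) continue;
            if (pick < 0 || p[i].burst < p[pick].burst ||
                (p[i].burst == p[pick].burst && p[i].arrival < p[pick].arrival))
                pick = i;
        }
        if (pick < 0) { time++; finished--; continue; }   /* nothing arrived yet */
        time += p[pick].burst;
        p[pick].done = 1;
        printf("P%d finishes at %d (waiting %d)\n",
               p[pick].pid, time, time - p[pick].arrival - p[pick].burst);
    }
    return 0;
}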
Advantages of SJF Scheduling
4. Round Robin Scheduling
:-Round Robin Scheduling is a method used by operating systems to
manage the execution time of multiple processes that are competing for
CPU attention. It is called "round robin" because the system rotates
through all the processes, allocating each of them a fixed time slice or
"quantum", regardless of their priority.
The primary goal of this scheduling method is to ensure that all processes are
given an equal opportunity to execute, promoting fairness among tasks.
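A simplified Round Robin sketch follows. To keep it short, all processes are assumed to arrive at time 0, and the quantum of 2 time units is made up; each pass of the outer loop gives every unfinished process at most one quantum.

/* Round Robin sketch: each process runs for at most one quantum per turn.
   Burst times and the quantum are made up; all arrive at time 0. */
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};          /* remaining burst time per process */
    int n = 3, quantum = 2, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;
            int run = burst[i] < quantum ? burst[i] : quantum;
            time += run;
            burst[i] -= run;
            printf("t=%2d: P%d ran %d unit(s), %d left\n", time, i + 1, run, burst[i]);
            if (burst[i] == 0) left--;
        }
    }
    return 0;
}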
5. Priority scheduling
:- Priority scheduling is one of the most common scheduling algorithms
used by the operating system to schedule processes based on their
priority. Each process is assigned a priority value based on criteria such
as memory requirements, time requirements, other resource needs, or the
ratio of average I/O to average CPU burst time. The process with the
highest priority is selected for execution first. If there are multiple
processes sharing the same priority, they are scheduled in the order they
arrived, following a First-Come, First-Served approach.
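A small sketch of non-preemptive priority selection with made-up data; in this sketch a lower number means a higher priority, and ties fall back to arrival order as described above.

/* Non-preemptive priority scheduling sketch; here a LOWER number means a
   HIGHER priority, ties fall back to arrival order (FCFS). Data is made up. */
#include <stdio.h>

struct proc { int pid, arrival, burst, priority, done; };

int main(void) {
    struct proc p[] = { {1, 0, 4, 2, 0}, {2, 1, 3, 1, 0}, {3, 2, 2, 3, 0} };
    int n = 3, time = 0;

    for (int finished = 0; finished < n; finished++) {
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (p[i].done || p[i].arrival > time) continue;
            if (pick < 0 || p[i].priority < p[pick].priority ||
                (p[i].priority == p[pick].priority && p[i].arrival < p[pick].arrival))
                pick = i;
        }
        if (pick < 0) { time++; finished--; continue; }   /* nothing arrived yet */
        time += p[pick].burst;
        p[pick].done = 1;
        printf("P%d (priority %d) finishes at %d\n", p[pick].pid, p[pick].priority, time);
    }
    return 0;
}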
6. Highest Response Ratio Next (HRRN) CPU Scheduling
7. Multilevel Feedback Queue Scheduling (MLFQ)
:- Multilevel Feedback Queue (MLFQ) Scheduling is like Multilevel Queue (MLQ)
Scheduling, but here processes can move between the queues, which makes it much
more efficient than multilevel queue scheduling.
It is more flexible.
It prevents starvation by moving a process that waits too long in a
lower-priority queue to a higher-priority queue.
Practical:- 4
Paging in detail
:- Paging is the process of moving parts of a program, called pages, from
secondary storage (like a hard drive) into the main memory (RAM). The
main idea behind paging is to break a program into smaller fixed-size
blocks called pages.
Paging is a memory management technique that addresses common
challenges in allocating and managing memory efficiently. Here we can
understand why paging is needed as a Memory Management technique:
Fixed page and frame size: Pages and frames have the same fixed
size. This simplifies memory management and improves system
performance.
Number of page table entries: The page table has one entry per
logical page. Thus, its size equals the number of pages in the process's
address space.
Advantages of Paging
Supports Virtual Memory: Paging enables the implementation of
virtual memory, allowing processes to use more memory than
physically available by swapping pages between RAM and secondary
storage.
Disadvantages of Paging
The main advantage that virtual memory provides is that a running process
does not need to be entirely in memory. Programs can be larger than the
available physical memory.
Working of Virtual Memory
All memory references within a process are logical addresses that are
dynamically translated into physical addresses at run time. This means
that a process can be swapped in and out of the main memory such that
it occupies different places in the main memory at different times
during the course of execution.
A process may be broken into a number of pieces, and these pieces need
not be contiguously located in the main memory during execution. The
combination of dynamic run-time address translation and the use of a
page or segment table permits this.
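The run-time translation can be sketched in a few lines. The page size, page-table contents, and addresses below are made up for illustration; the point is only the split of a logical address into page number and offset, and the lookup of the frame in the page table.

/* Paging sketch: translate a logical address into a physical address.
   Page size, page-table contents and addresses are made up for illustration. */
#include <stdio.h>

#define PAGE_SIZE 4096                      /* 4 KB pages and frames */

int main(void) {
    /* Hypothetical page table: logical page number -> physical frame number. */
    int page_table[] = {5, 2, 7, 1};

    unsigned long logical = 2 * PAGE_SIZE + 123;   /* lies in page 2, offset 123 */
    unsigned long page    = logical / PAGE_SIZE;
    unsigned long offset  = logical % PAGE_SIZE;
    unsigned long physical = (unsigned long)page_table[page] * PAGE_SIZE + offset;

    printf("logical %lu -> page %lu, offset %lu -> frame %d -> physical %lu\n",
           logical, page, offset, page_table[page], physical);
    return 0;
}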
Paging
Segmentation
Paging:-
Paging divides memory into small fixed-size blocks called pages. When the
computer runs out of RAM, pages that aren't currently in use are moved to
the hard drive, into an area called a swap file. The swap file acts as an
extension of RAM. When a page is needed again, it is swapped back into
RAM, a process known as page swapping.
Segmentation:-
Conclusion
Practical:-6
6. File Management
A file system is a method an operating system uses to store, organize, and
manage files and directories on a storage device. Some common types of
file systems include FAT32, NTFS, and ext4.
Data Protection: File systems often include features such as file and
folder permissions, backup and restore, and error detection and
correction, to protect data from loss or corruption.
Improved Performance: A well-designed file system can improve
the performance of reading and writing data by organizing it efficiently
on disk.
Disk Space Overhead: File systems may use some disk space to store
metadata and other overhead information, reducing the amount of space
available for user data.
Conclusion
Practical:-7
Process Management
Memory Management
File and Disk Management
I/O System Management
Advantages of disk management include:
Conclusion
Practical:- 8
RAID Controller
A RAID controller manages the hard drives in a large storage system. It sits
between the computer's operating system and the physical drives, organizing
them into groups (arrays) that are easier to manage. This speeds up how fast
the system can read and write data, and it also adds a layer of protection in
case one of the drives fails. In effect, it makes the drives work better
together and keeps important data safer.
RAID is transparent to the underlying system. This means that, to the
host system, it appears as a single big disk presenting itself as a linear
array of blocks. This allows older technologies to be replaced by RAID
without making too many changes to the existing code.
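A tiny sketch of that "linear array of blocks" idea under simple striping (RAID-0): the disk count and block numbers are made up; the RAID layer maps each logical block to a disk and a position on that disk, while the host only ever sees the logical numbers.

/* RAID-0 striping sketch: the host sees one linear array of blocks, while
   the RAID layer spreads consecutive blocks across the disks round-robin.
   Disk count and block numbers are made up for illustration. */
#include <stdio.h>

#define NUM_DISKS 4

int main(void) {
    for (unsigned long logical = 0; logical < 8; logical++) {
        unsigned long disk   = logical % NUM_DISKS;   /* which disk holds it  */
        unsigned long stripe = logical / NUM_DISKS;   /* block index on disk  */
        printf("logical block %lu -> disk %lu, block %lu\n", logical, disk, stripe);
    }
    return 0;
}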
RAID-0 (Striping)
RAID-1 (Mirroring)
RAID-2 (Bit-Level Striping with Dedicated Parity)
RAID-5 (Block-Level Striping with Distributed Parity)
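The parity idea behind RAID-5 can be sketched with XOR. The byte values below are made up; the parity block is the XOR of the data blocks, so if any single block is lost, it can be rebuilt by XOR-ing the surviving blocks with the parity.

/* Parity sketch behind RAID-5: parity = XOR of the data blocks, so any one
   lost block can be rebuilt by XOR-ing the remaining blocks with the parity.
   The 4-byte "blocks" are made up for illustration. */
#include <stdio.h>

#define BLOCK 4

int main(void) {
    unsigned char d0[BLOCK] = {1, 2, 3, 4};
    unsigned char d1[BLOCK] = {9, 8, 7, 6};
    unsigned char d2[BLOCK] = {5, 5, 5, 5};
    unsigned char parity[BLOCK], rebuilt[BLOCK];

    for (int i = 0; i < BLOCK; i++)
        parity[i] = d0[i] ^ d1[i] ^ d2[i];          /* write-time parity */

    for (int i = 0; i < BLOCK; i++)
        rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];     /* pretend d1 was lost */

    for (int i = 0; i < BLOCK; i++)
        printf("byte %d: original %d, rebuilt %d\n", i, d1[i], rebuilt[i]);
    return 0;
}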
Conclusion