Operating Systems
An Operating System acts as a communication bridge (interface) between the user and computer hardware. The
purpose of an operating system is to provide a platform on which a user can execute programs conveniently and
efficiently.
The main task an operating system carries out is the allocation of resources and services, such as the allocation
of memory, devices, processors, and information. The operating system also includes programs to manage these
resources, such as a traffic controller, a scheduler, a memory management module, I/O programs, and a file
system. The operating system simply provides an environment within which other programs can do useful work.
Booting - an OS being loaded into main memory
History of OS
No OS (1940 - 1950)
● Instructions had to be typed in manually
● No operating system
● User had to directly interact with the hardware
● Programs were loaded directly into the computer
● Not user friendly
● A deep understanding of machine language was needed
● Machines were run from a console with display lights and toggle switches
● Uniprogramming
● Processor sat idle when loading programs and during I/O
Simple batch system
● Maximized processor utilization
● Programs were recorded on a magnetic tape with an inexpensive machine
● The OS loaded and executed the programs on the tape one at a time
● Users wrote their programs on punch cards and handed them to the computer operator
● When the current program ended execution, its output was written to another tape and the OS loaded the next program
● At the end of the entire batch of programs, the output tape was printed with an inexpensive machine
● No direct access to hardware
● Uniprogramming
● High response time
● Processor sat idle during I/O
Multiprogrammed batch systems
● Central theme of modern operating systems
● Introduced in the 3rd generation to minimize processor idle time during I/O
● Memory is partitioned to hold multiple programs
● When the current program is waiting for I/O, the OS switches the processor to execute another program in memory (the first in the ready queue)
● If memory is large enough to hold more programs, the processor can be kept close to 100% busy
Time sharing systems
● Processor time is shared among multiple interactive users
Functions of an operating system
● Memory management
● File management
● Processor management
● Device management
● User interface or command interpreter
● Booting the computer
● Security
● Control over system performance
● Job accounting
● Error detecting
● Coordinating between other users and softwares
● Network management
SOFTWARE
Utility software: anti-virus software, device manager, file management tools, file compression tools, disk management tools
OS: macOS, iOS, Windows, Ubuntu, Linux
Language translators: compiler, interpreter (e.g. the Python interpreter)
Off the shelf: bought without customizing
Bespoke: designed and built on demand with a specific purpose in mind
A directory is a container that is used to contain folders and files. It organizes files
and folders in a hierarchical manner.
Directory structures: single-level directory, two-level directory, tree (hierarchical) structure
Features:
● Disk space overhead: File systems may use some disk space
to store metadata and other overhead information, reducing
the amount of space available for user data.
Security measures:
● Encryption
● Use of passwords
● Authentication
● Access control
● Backup and recovery
● Network security
● Physical security
Authentication methods
● Passwords
● Biometrics (behavioural biometrics: keystroke dynamics, voice
recognition, signature recognition; physiological biometrics:
retina pattern, facial recognition, fingerprint recognition)
● Two factor authentication
● Multi factor authentication
Disk Fragmentation
Fragmentation is the unintentional division of disk space into many small free areas that
cannot be used effectively, due to the scattered storage of file fragments.
Fragmentation of a disk means that data is allocated in non-sequential form. Usually, data is
stored on the hard drive sequentially, and the disk head follows the incoming data so that it
can be read efficiently. But when older data is deleted from this sequence, the sequence is
disturbed and the data ends up scattered. Likewise, when older data is updated with a larger
amount of data, the operating system splits the data into small pieces and stores them in
different locations of the storage area.
Fragmentation can also occur at various levels within a system. File fragmentation, for
example, can occur at the file system level, in which a file is divided into multiple
non-contiguous blocks and stored on a storage medium. Memory fragmentation can occur at
the memory management level, where the system allocates and deallocates memory blocks
dynamically. Network fragmentation occurs when a packet of data is divided into smaller
fragments for transmission over a network.
Disadvantages of fragmentation
1. Read and write times increase, because the disk head has to move across scattered storage sections of the disk.
2. Efficiency and performance of the disk decrease.
3. Disk usage can reach 100 percent.
4. The extra storage sections acquired decrease the usable volume of the disk.
5. The health of the hard disk may be affected by fragmentation.
6. Slower boot-up time of the machine.
7. Errors in, and conflicts between, applications.
Other effects: slower performance, wasted disk space, data loss, increased risk of system crashes, reduced battery life.
Types of fragmentation
1. Internal fragmentation
Internal fragmentation occurs when storage is allocated in fixed-size blocks and a file or
process is smaller than the block allocated to it, leaving unused space inside the
allocated block.
2. External fragmentation
External fragmentation occurs when a storage medium, such as a hard disc or solid-state
drive, has many small blocks of free space scattered throughout it. This can happen when
a system creates and deletes files frequently, leaving many small blocks of free space on
the medium. When a system needs to store a new file, it may be unable to find a single
contiguous block of free space large enough to store the file and must instead store the
file in multiple smaller blocks. This can cause external fragmentation and performance
problems when accessing the file.
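The scenario above can be sketched with a toy disk model (the block layout and file names are made up for illustration): after files are created and some are deleted, the free space is plentiful in total but scattered into small runs.

```python
# A toy model of external fragmentation: 12 disk blocks, four 3-block
# files, then alternate files deleted. Plenty of free blocks remain in
# total, but no contiguous run is large enough for a 4-block file.
disk = ["free"] * 12

def allocate(start, length, name):
    # Mark a contiguous run of blocks as used by `name`.
    for i in range(start, start + length):
        disk[i] = name

allocate(0, 3, "A")
allocate(3, 3, "B")
allocate(6, 3, "C")
allocate(9, 3, "D")

# Deleting alternate files leaves the free space scattered in small runs.
for i, block in enumerate(disk):
    if block in ("A", "C"):
        disk[i] = "free"

def largest_contiguous_free(d):
    # Length of the longest run of consecutive free blocks.
    best = run = 0
    for block in d:
        run = run + 1 if block == "free" else 0
        best = max(best, run)
    return best

print("total free blocks:", disk.count("free"))                  # 6
print("largest contiguous run:", largest_contiguous_free(disk))  # 3
```

Six blocks are free, yet a new 4-block file cannot be stored contiguously and would have to be split, which is exactly external fragmentation.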
Data Fragmentation
In data fragmentation, data is stored in non-sequential form
Disk Defragmentation
Defragmentation rearranges the scattered fragments of files into contiguous blocks,
turning a fragmented disk into a defragmented disk.
● Linked allocation: no external fragmentation, and files can grow easily, but many seeks
are required to access file data. Example: the MS-DOS FAT file system.
Disadvantages of indexed allocation
● A bad index block could cause the loss of the entire file.
● The size of a file depends on the number of pointers an index block can hold.
● Having an index block for a small file is a waste of space.
For very small files, say files that span only 2-3 blocks, indexed
allocation keeps one entire block (the index block) for the pointers,
which is inefficient in terms of memory utilization. In linked
allocation, by contrast, we lose the space of only one pointer per block.
● More pointer overhead.
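The pointer-overhead comparison above can be put into rough numbers (the block and pointer sizes below are assumed, purely for illustration):

```python
# Assumed sizes for illustration only.
BLOCK_SIZE = 512      # bytes per disk block
POINTER_SIZE = 4      # bytes per block pointer

def linked_overhead(data_blocks):
    # Linked allocation loses one pointer's worth of space in every block.
    return data_blocks * POINTER_SIZE

def indexed_overhead(data_blocks):
    # Indexed allocation dedicates one entire block to pointers,
    # no matter how small the file is.
    return BLOCK_SIZE

print(linked_overhead(2), "bytes overhead (linked, 2-block file)")    # 8
print(indexed_overhead(2), "bytes overhead (indexed, 2-block file)")  # 512
```

For a 2-block file the index block wastes a whole 512-byte block against 8 bytes of pointers under linked allocation, which is the inefficiency described above.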
Maintenance of secondary storage
Disk formatting
Formatting is the process of preparing a data storage device for initial use which may
also create one or more new file systems.
Disk formatting is the process of configuring a data-storage device such as a hard drive, floppy disk
or flash drive when it is going to be used for the very first time. Disk formatting is usually required
when a new operating system is going to be used, or when there is a space issue and additional space
is required for the storage of more data on the drive. When we format a disk, the existing files on the
disk are erased. Formatting can also scan for viruses and repair the bad sectors within the drive, and
it can remove bad applications and various sophisticated viruses. Since disk formatting deletes data
and removes all the programs installed on the drive, it should be done with caution: we must have a
backup of all the data and applications we require. Disk formatting does take time, but frequent
formatting of the disk decreases the life of the hard drive.
As the name suggests, partitioning means division. Partitioning is the process of
dividing the hard disk into one or more regions, called partitions.
● As file deletion is done by the operating system, data on a disk is not fully erased
during every high-level format. Instead, the links to the files are deleted, and the area on
the disk containing the data is retained until it is overwritten.
● It's possible to recover data from a formatted hard drive after a quick format.
However, it's not possible to recover data after a full format. To maximize the chances
of recovering data, avoid writing new files to the formatted drive. You can also use a
trustworthy data recovery program
● Recovering data from a formatted disk can be challenging, but it's not always
impossible. When a disk is formatted, the file system structures are generally deleted,
but the actual data may still be present on the disk until new data overwrites it
● The success of data recovery depends on various factors, including the extent of
overwriting, the type of formatting, and the actions taken after formatting. It's
essential to act quickly and cautiously to maximize the chances of successful data
recovery.
Steps you can take to attempt data recovery:
Stop Using the Disk:
Immediately stop using the formatted disk to prevent further overwriting of data. Any new
data written to the disk may overwrite the sectors where your old data is still present.
Use Data Recovery Software:
There are various data recovery software tools available that can help you recover data
from a formatted disk. Examples include Recuva, EaseUS Data Recovery Wizard, Disk Drill,
and TestDisk. These tools scan the disk for recoverable files and attempt to restore them.
Bootable Recovery Tools:
Some data recovery tools offer bootable versions that you can run from a USB drive or
CD/DVD. This allows you to use the tool without accessing the formatted disk's operating
system, minimizing the risk of overwriting data.
Professional Data Recovery Services:
If the data is critical and you are unable to recover it using software, you may need to
consider professional data recovery services. These services have specialized equipment
and expertise to recover data in more challenging situations. However, they can be
expensive.
Restore from Backup:
If you have a backup of the data, this is the most straightforward solution. Regular
backups are essential for preventing data loss in various scenarios, including accidental
formatting.
Check for Hidden Partitions:
In some cases, formatting may create a new file system while leaving the old data intact
in a hidden partition. Check if there are any hidden partitions on the disk that may
contain your data.
Avoid Writing to the Disk:
Minimize any activity on the formatted disk. Do not install new software, save files, or
perform any other actions that could write data to the disk.
Learn from the Experience:
Consider implementing a backup strategy to prevent future data loss. Regularly back up
your important data to an external drive, cloud storage, or another secure location.
Compaction: the compaction process typically involves moving allocated memory blocks together to fill gaps and create larger contiguous free
memory blocks.
Compaction is a process in which the free space is collected into one large memory chunk to make
space available for processes. In memory management, swapping creates multiple fragments in
the memory because of processes moving in and out. Compaction refers to combining all the
empty spaces together on one side of memory and all the processes together on the other.
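The idea can be sketched on a toy memory array (the process names are illustrative): allocated blocks slide together, and the scattered free fragments merge into one contiguous region.

```python
# A minimal sketch of compaction: move allocated blocks to the front,
# preserving their order, so the free blocks merge into one chunk.
memory = ["P1", "free", "P2", "free", "free", "P3", "free"]

def compact(mem):
    allocated = [b for b in mem if b != "free"]
    return allocated + ["free"] * (len(mem) - len(allocated))

print(compact(memory))
# ['P1', 'P2', 'P3', 'free', 'free', 'free', 'free']
```

After compaction a 4-block request can be satisfied, even though before it no single free fragment was larger than 2 blocks.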
Process management is the functionality of an operating system that handles the creation,
scheduling, and termination of processes. The operating system keeps track of the state of each
process, allocates resources such as CPU time and memory to processes, and provides mechanisms
for synchronization and communication between them.
Types of processes
If the operating system supports multiple users, then the services under process management are very important. In this
regard, the operating system has to keep track of all the processes, schedule them, and dispatch them one after another.
Yet each user should feel that he has full control of the CPU.
Operations on processes:
1. Create a child process identical to the parent's
2. Terminate a process
3. Wait for a child process to terminate
4. Change the priority of a process
5. Block a process
6. Ready a process
7. Dispatch a process
8. Suspend a process
9. Resume a process
10. Delay a process
11. Fork a process
The Operating System does the following activities for processor management:
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process is no longer required.
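Two of the operations listed above, creating a child process identical to the parent and waiting for the child to terminate, can be sketched with Python's os module (this is POSIX-only, since os.fork is not available on Windows):

```python
import os

# Fork a child identical to the parent, then wait for it to terminate.
pid = os.fork()
if pid == 0:
    # Child process: an identical copy of the parent, with its own PID.
    os._exit(7)  # terminate the child with exit status 7
else:
    # Parent process: block until the child terminates, then read its status.
    _, status = os.waitpid(pid, 0)
    print("child exited with status", os.WEXITSTATUS(status))
```

The parent's waitpid call is exactly the "wait for a child process to terminate" operation, and fork is the "create a child identical to the parent" operation.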
Attributes or Characteristics of a Process
attributes of a process are also known as the context of the process. Every process has its own process
control block(PCB), i.e. each process will have a unique PCB. All of the above attributes are part of the
PCB. A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID).
Context switching is the process of saving the current state of a running process and loading the state of
another process so that the CPU can switch its execution from one process to another. The process control
block (PCB) plays a key role in context switching because it contains all relevant information about the
process. When the operating system decides to switch to another process, it stores the state of the current
process in its PCB, including the CPU registers and program counter. It then loads the PCB of the next
process, restores its state, and resumes execution from where it left off. This seamless switching between
processes allows the operating system to create the illusion of simultaneous execution, even though the
processor can only run one process at a time.
Additional Points to Consider for Process Control Block (PCB)
● Interrupt handling: The PCB also contains information about the interrupts that a process may
have generated and how they were handled by the operating system.
● Context switching: The process of switching from one process to another is called context
switching. The PCB plays a crucial role in context switching by saving the state of the current
process and restoring the state of the next process.
● Real-time systems: Real-time operating systems may require additional information in the PCB,
such as deadlines and priorities, to ensure that time-critical processes are executed in a timely
manner.
● Virtual memory management: The PCB may contain information about a process’s virtual
memory management, such as page tables and page fault handling.
● Inter-process communication: The PCB can be used to facilitate inter-process communication by
storing information about shared resources and communication channels between processes.
● Fault tolerance: Some operating systems may use multiple copies of the PCB to provide fault
tolerance in case of hardware failures or software errors.
Context Switching:
The process of saving the context of one process and loading the context of another
process is known as Context Switching. In simple terms, it is like loading and unloading
the process from the running state to the ready state.
A context switch is triggered when:
1. A high-priority process comes to the ready state (i.e. with a higher priority than the
running process)
2. An interrupt occurs
3. The system switches between user and kernel mode (though this does not always require a full context switch)
4. Preemptive CPU scheduling is used
A context switch is the mechanism to store and restore the state or context
of a CPU in Process Control block so that a process execution can be
resumed from the same point at a later time.
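The save/restore role of the PCB can be sketched as follows (the field names are simplified and illustrative, not a real kernel layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # A drastically simplified process control block.
    pid: int
    state: str = "ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

@dataclass
class CPU:
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, current, next_proc):
    # Save the running process's context into its PCB...
    current.program_counter = cpu.program_counter
    current.registers = dict(cpu.registers)
    current.state = "ready"
    # ...then load the next process's saved context onto the CPU.
    cpu.program_counter = next_proc.program_counter
    cpu.registers = dict(next_proc.registers)
    next_proc.state = "running"

cpu = CPU(program_counter=120, registers={"ax": 5})
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=40, registers={"ax": 9})

context_switch(cpu, p1, p2)
print(p1.program_counter, cpu.program_counter)  # 120 40
```

P1's state is preserved in its PCB, so a later switch back to P1 can resume it from program counter 120 exactly as described above.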
The operating system can use different scheduling algorithms to schedule processes.
1. First Come First Served (FCFS):
This is the simplest scheduling algorithm, where processes are executed on a first-come, first-served basis. FCFS
is non-preemptive, which means that once a process starts executing, it continues until it finishes or waits
for I/O.
2. Shortest Job First (SJF):
SJF is a scheduling algorithm that selects the process with the shortest burst time. The burst time is
the time a process takes to complete its execution. SJF minimizes the average waiting time of processes.
3. Round Robin (RR):
RR is a preemptive scheduling algorithm that gives each process a fixed time slice (quantum) in turn. If a
process does not complete its execution within its time slice, it is preempted and added to the end of the
ready queue. RR ensures fair distribution of CPU time to all processes and avoids starvation.
4. Priority Scheduling:
This scheduling algorithm assigns priority to each process and the process with the highest
priority is executed first. Priority can be set based on process type, importance, or resource
requirements.
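The difference between FCFS and non-preemptive SJF can be illustrated by computing average waiting times for processes that all arrive at time 0 (the burst times are made up for the example):

```python
# Burst times of P1, P2, P3, all arriving at time 0 (illustrative values).
bursts = [6, 8, 3]

def avg_waiting_time(burst_times):
    # Each process waits for the sum of the bursts that ran before it.
    waiting, elapsed = 0, 0
    for b in burst_times:
        waiting += elapsed
        elapsed += b
    return waiting / len(burst_times)

print("FCFS:", avg_waiting_time(bursts))          # run in arrival order
print("SJF: ", avg_waiting_time(sorted(bursts)))  # shortest burst first
```

FCFS gives waits of 0, 6 and 14 (average ≈ 6.67), while SJF runs the 3-unit job first for waits of 0, 3 and 9 (average 4.0), showing why SJF minimizes average waiting time.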
Process schedulers
In a multiprogramming environment, the OS decides which process gets the processor when, and for how much time. This
function is called process scheduling.
Long-term scheduling (job scheduling):
It determines which programs are admitted to the system for processing. The job scheduler
selects processes from the queue and loads them into memory for execution; the process is
loaded into memory for CPU scheduling. It brings new processes to the 'Ready State' and
controls the Degree of Multiprogramming, i.e. the number of processes present in the ready
state at any point in time. It is important that the long-term scheduler makes a careful
selection of both I/O-bound and CPU-bound processes: I/O-bound tasks spend much of their
time in input and output operations, while CPU-bound processes spend their time on the CPU.
The job scheduler increases efficiency by maintaining a balance between the two. Long-term
schedulers operate at a high level and are typically used in batch-processing systems.
Medium-term scheduling:
Medium-term scheduling is in charge of swapping processes between main memory and
secondary storage. It is responsible for suspending and resuming processes, mainly by
swapping (moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix, or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It is helpful in
maintaining a balance between the I/O-bound and CPU-bound processes, and it reduces the
degree of multiprogramming.
Short-term scheduling (low-level scheduling):
Determines which ready process will be assigned the CPU when it next becomes available,
i.e. it selects one process from the ready state for scheduling on the running state.
Note: the short-term scheduler only selects the process to schedule; it doesn't load the
process into the running state. This is where the scheduling algorithms are used. The CPU
scheduler is responsible for ensuring no starvation due to high-burst-time processes. The
dispatcher is responsible for loading the process selected by the short-term scheduler
onto the CPU (Ready to Running state); context switching is done by the dispatcher. A
dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
How an operating system manages the
resources
Logical and Physical Address Space
● Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It
is also known as a Virtual address. Logical address space can be defined as the size of the
process. A logical address can be changed.
● Physical Address Space: An address seen by the memory unit (i.e. the one loaded into the
memory address register of the memory) is commonly known as a "Physical Address". A
physical address is also known as a Real address. The set of all physical addresses
corresponding to these logical addresses is known as the Physical address space. Physical
addresses are computed by the MMU: the run-time mapping from virtual to physical addresses
is done by a hardware device, the Memory Management Unit (MMU). The physical address
always remains constant.
● The mapping between logical pages and physical page frames is maintained by the page table,
which is used by the memory management unit to translate logical addresses into physical
addresses. The page table maps each logical page number to a physical page frame number.
Memory management
Memory management is the functionality of an operating system which handles or manages primary
memory and moves processes back and forth between main memory and disk during execution.
In the MMU scheme, the value in the relocation register is added to every address generated
by a user process at the time it is sent to memory. The user program deals with logical
addresses; it never sees the real physical addresses.
Virtual memory is partitioned into equal-size pages. Main memory is also partitioned
into equal-size page frames.
Size of a page = size of a page frame
Programs are also partitioned into pages at the time of loading.
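Paged address translation as described above can be sketched as follows (the page size and the page-table contents are assumed for illustration): the logical address is split into a page number and an offset, and the page table supplies the physical frame.

```python
PAGE_SIZE = 1024                 # bytes per page / page frame (assumed)
page_table = {0: 5, 1: 2, 2: 7}  # logical page number -> physical frame number

def translate(logical_address):
    # The MMU splits the logical address into (page, offset)...
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    # ...looks up the frame (a real MMU raises a page fault on a miss)...
    frame = page_table[page]
    # ...and recombines frame and offset into the physical address.
    return frame * PAGE_SIZE + offset

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2054
```

Note that the offset is unchanged by translation, which is why a page and a page frame must be the same size.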
Device drivers
● A driver provides a software interface to hardware devices, enabling operating systems
and other computer programs to access hardware functions without knowing the precise
hardware details.
● Device drivers depend on both the hardware and the operating system loaded into the
computer
Spooling
Data is sent to and stored in main memory or other volatile storage until it is requested
for execution by a program or computer. Spooling makes use of the disk as a large buffer to
send data to printers and other devices. It can also be used for input, but it is more
commonly used for output. Its primary function is to prevent two users from printing on the
same page at the same time, which would result in their output being completely mixed
together. It prevents this by using the FIFO (First In First Out) strategy to retrieve the
stored jobs from the spool, and that synchronization keeps the output from being mixed
together. Spooling also aids in the reduction of idle time, as well as overlapping I/O with
CPU work. Simple forms of file management are frequently provided by batch systems: access
to the file is sequential, and batch systems do not necessitate the management of
time-critical devices.
Advantages
● The spooling operation makes use of a disk as a very large buffer.
● It enables applications to run at the CPU's speed while I/O devices operate at their full speed.
● Spooling is capable of overlapping the I/O operations of one job with the processor operations of another.
Disadvantages
● Depending on the volume of requests received and the number of input devices connected, spooling needs a lot of storage.
● Since the spool is created in secondary storage, having lots of input devices active at once may cause the secondary storage to fill up quickly and increase disk traffic. As a result, the disk becomes slower and slower as the volume of traffic grows.
An operating system does the following activities related to distributed
environment:
● Handles I/O device data spooling as devices have different data access
rates.
● Maintains the spooling buffer which provides a waiting station where
data can rest while the slower device catches up.
● Maintains parallel computation because of the spooling process, as a
computer can perform I/O in a parallel fashion. It becomes possible for
the computer to read data from a tape, write data to disk, and write
out to a tape printer while it is doing its computing task.
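The FIFO behaviour of a spool can be sketched with a toy print queue (user and file names are made up): jobs from several users wait in the buffer and are printed strictly one whole job at a time, so their output never interleaves.

```python
from collections import deque

# The spool buffer: in a real system this lives on disk; here it is a deque.
spool = deque()

def submit(user, document):
    # The job waits in the spool buffer until the printer is ready.
    spool.append((user, document))

def printer_run():
    # Drain the spool FIFO, printing one complete job at a time.
    printed = []
    while spool:
        user, doc = spool.popleft()
        printed.append(f"{user}:{doc}")
    return printed

submit("alice", "report.pdf")
submit("bob", "notes.txt")
print(printer_run())  # ['alice:report.pdf', 'bob:notes.txt']
```

Because the CPU only appends to the queue, applications run at CPU speed while the slower printer catches up, which is the overlap of I/O and computation described above.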
Buffering