
OPERATING SYSTEMS

What is an operating system?

An Operating System acts as a communication bridge (interface) between the user and computer hardware. The
purpose of an operating system is to provide a platform on which a user can execute programs conveniently and
efficiently.

The main task an operating system carries out is the allocation of resources and services, such as the allocation
of memory, devices, processors, and information. The operating system also includes programs to manage these
resources, such as a traffic controller, a scheduler, a memory management module, I/O programs, and a file
system. The operating system simply provides an environment within which other programs can do useful work.
Booting - the process of loading the OS into main memory
History of OS
No OS (1940 - 1950)
● Users had to manually type in instructions
● No operating system
● The user had to interact directly with the hardware
● Programs were loaded directly into the computer
● Not user friendly
● A deep understanding of machine language was needed
● Machines were run from the console using display lights and toggle switches
● Uniprogramming
● The processor sat idle while loading programs and doing I/O
Simple batch system
● Maximized processor utilization
● Programs were recorded on a magnetic tape with an inexpensive machine
● The OS loaded and executed the programs on the tape one at a time
● Users wrote their programs on punch cards and handed them to the computer operator
● When the current program ended execution, its output was written to another tape and the OS loaded the next program
● At the end of the entire batch of programs, the output tape was printed with an inexpensive machine
● No direct access to hardware
● Uniprogramming
● High response time
● The processor sat idle during I/O
Multiprogrammed batch systems

● A central theme of modern operating systems
● Introduced in the 3rd generation to minimize processor idle time during I/O
● Memory is partitioned to hold multiple programs
● When the current program is waiting for I/O, the OS switches the processor to execute another program in memory (the first in the ready queue)
● If memory is large enough to hold more programs, the processor can be kept nearly 100% busy
Time sharing systems

● An extended version of the multiprogramming system
● Introduced to minimize response time and maximize user interaction during program execution
● Uses context switching
● Enables sharing of processor time among multiple programs
● Rapidly switching among programs creates the illusion that multiple programs execute at once
● Basically, the OS switches from one program to another after a certain interval of time so that every program can get access to the CPU and complete its work.
Different types of OS

1. Based on the users


- Single user
- Multi user

2. Based on the number of tasks


- Single task
- Multi task
Responsibilities of an OS
● It controls all the computer resources.
● It provides valuable services to user programs.
● It coordinates the execution of user programs.
● It provides resources for user programs.
● It provides an interface (virtual machine) to the user.
● It hides the complexity of software.
● It supports multiple execution modes.
● It monitors the execution of user programs to prevent errors.
Functions of an OS

● Memory management
● File management
● Processor management
● Device management
● User interface or command interpreter
● Booting the computer
● Security
● Control over system performance
● Job accounting
● Error detecting
● Coordinating between other users and software
● Network management
SOFTWARE

System software
● OS - macOS, iOS, Windows, Ubuntu, Linux
● Utility software - anti-virus software, device manager, file management tools, file compression tools, disk management tools
● Language translators - compiler, interpreter (e.g. the Python interpreter)

Application software
● Off the shelf - bought without customizing
● Bespoke - designed and built on demand with a specific purpose in mind

Open source software vs proprietary software
● Open source software - the source code is available and it is free to use, modify, or redistribute (e.g. Linux, Mozilla Firefox)
● Proprietary software - e.g. Windows and other Microsoft products
How an operating system manages
directories/folders and files in computers.
FILES
A file is a named collection of information, usually a sequence of bytes, recorded in secondary storage.

A file can be viewed in 2 different ways:

1. Logical view (programmer's view)
How users see the file: a linear collection of records.

2. Physical view (OS's view)
How the file is stored in the secondary storage.
File attributes

Each file has an associated collection of information (a small code sketch after this list shows how these attributes can be read on a running system):
● File name
● type(eg: source, data, executable)
● Owner
● Location on the secondary storage
● Organization (sequential, indexed,
random)
● Access permission
● Time and date of creation, last
modification, last access
● File size
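To make these attributes concrete, here is a minimal sketch (assuming a POSIX-like system and a hypothetical file named example.txt) that reads a file's metadata using Python's standard os, stat and time modules.

import os
import stat
import time

path = "example.txt"                 # hypothetical file used for illustration

info = os.stat(path)                 # metadata recorded by the file system
print("Name:        ", os.path.basename(path))
print("Size:        ", info.st_size, "bytes")
print("Permissions: ", stat.filemode(info.st_mode))   # access permission bits
print("Owner (uid): ", info.st_uid)                   # owner id (POSIX systems)
print("Modified:    ", time.ctime(info.st_mtime))     # last modification time
print("Accessed:    ", time.ctime(info.st_atime))     # last access time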
File types
● Graphics Interchange Format (GIF)
● Tag Image File Format (TIFF)
● Portable Document Format (PDF)
File Structures

A File Structure is a format that the operating system can understand.

• A file has a certain defined structure according to its type.


• A text file is a sequence of characters organized into lines.
• An object file is a sequence of bytes organized into blocks that are
understandable by the machine.
• A source file is a sequence of functions and procedures.
Directory and file organization

A directory is a container that is used to contain folders and files. It organizes files
and folders in a hierarchical manner.

Single level directory
● Naming problem: users cannot have the same name for two files.
● Grouping problem: users cannot group files according to their needs.

Two level directory structure
● Path name: due to the two levels there is a path name for every file, used to locate that file.
● Now, we can have the same file name for different users.
● Searching is efficient in this method.

Tree structure / hierarchical directory structure
● The directory is maintained in the form of a tree. Searching is efficient and there is also grouping capability. We have an absolute or relative path name for each file.
The advantages of maintaining directories

● Efficiency: A file can be located more quickly.

● Naming: It becomes convenient for users, as two users can have the same name for different files or may have different names for the same file.

● Grouping: Logical grouping of files can be done by properties, e.g. all Java programs, all games, etc.
File systems
A file system is a method an operating system uses to store,
organize, and manage files and directories on a storage device.
The file system‘s job is to keep the files organized in the best way
possible.

1. FAT (File Allocation Table): An older file system used by older versions of Windows and other operating systems. Introduced with MS-DOS (Microsoft Disk Operating System). It keeps track of the location of each file on the device by using a table that maps file names to their physical location on the disk. The FAT and the root directory reside at a fixed location of the volume so that the system's boot files can be correctly located. To protect a volume, two copies of the FAT are kept.
Main versions of FAT: FAT12, FAT16, and FAT32.
2. NTFS (New Technology File System): A modern file system used by Windows. It supports features such as file and folder permissions, compression, and encryption. It is a proprietary file system developed by Microsoft and is an improvement over FAT.

3. ext (Extended File System): A file system commonly used on Linux and Unix-based operating systems.

4. HFS (Hierarchical File System): A file system used by macOS.

5. APFS (Apple File System): A newer file system introduced by Apple for their Macs and iOS devices.
Improvements of NTFS over FAT

● Capability to recover from some disk-related errors automatically
● Support for the Unicode character set
● Improved support for larger HDDs
● Better security, as permissions and encryption are used to restrict access to specific files to approved users
Advantages of a file system
● Organization: A file system allows files to be organized into
directories and subdirectories, making it easier to manage
and locate files.

● Data protection: File systems often include features such as file and folder permissions, backup and restore, and error detection and correction, to protect data from loss or corruption.

● Improved performance: A well-designed file system can improve the performance of reading and writing data by organizing it efficiently on disk.
Disadvantages of a file system
● Compatibility issues: Different file systems may not be
compatible with each other, making it difficult to transfer
data between different operating systems.

● Disk space overhead: File systems may use some disk space
to store metadata and other overhead information, reducing
the amount of space available for user data.

● Vulnerability: File systems can be vulnerable to data corruption, malware, and other security threats, which can compromise the stability and security of the system.
File Security

● Encryption
● Use of passwords
● Authentication
● Access control
● Backup and recovery
● Network security
● Physical security
Authentication methods

● Passwords
● Biometrics (BEHAVIOURAL BIOMETRICS : keystroke, voice
recognition, signature recognition. PHYSIOLOGICAL BIOMETRICS :
eye retina pattern, facial recognition, fingerprint recognition)
● Two factor authentication
● Multi factor authentication
Disk Fragmentation
Fragmentation is the unintentional division of Disk into many small free areas that
cannot be used effectively due to scattered storage of file fragments.
Fragmentation of a disk means that data is allocated in a non-sequential form. Usually, data is stored on the hard drive in sequential form and the data header keeps following the incoming data, so that it is easy to read the data efficiently. But when we delete some older data from this sequence, the sequence of data management is disturbed and the data ends up scattered. Also, when we update older data with a bigger block of data, the operating system splits the data into small packets and stores them in different locations of the storage area.
Fragmentation can also occur at various levels within a system. File fragmentation, for
example, can occur at the file system level, in which a file is divided into multiple
non-contiguous blocks and stored on a storage medium. Memory fragmentation can occur at
the memory management level, where the system allocates and deallocates memory blocks
dynamically. Network fragmentation occurs when a packet of data is divided into smaller
fragments for transmission over a network.
Disadvantages of fragmentation

1. Read and write times of data on the disk increase, because the data header has to move across different storage sections of the disk.
2. Efficiency and performance of the disk decrease.
3. Usage of the disk can reach 100 percent.
4. More storage sections are acquired, which decreases the usable volume of the disk.
5. The health of the hard disk may be affected by fragmentation.
6. Slow boot-up time of the machine.
7. Errors in and conflicts between applications.

Other consequences: slower performance, wasted disk space, data loss, increased risk of system crashes, and reduced battery life.
Types of fragmentation

1. Internal fragmentation

Internal fragmentation occurs when there is unused space within a memory block. For example, if a system allocates a 64KB block of memory to store a file that is only 40KB in size, that block will contain 24KB of internal fragmentation. This can occur when the system employs a fixed-size block allocation method, such as a memory allocator with a fixed block size.

2. External fragmentation

External fragmentation occurs when a storage medium, such as a hard disc or solid-state
drive, has many small blocks of free space scattered throughout it. This can happen when
a system creates and deletes files frequently, leaving many small blocks of free space on
the medium. When a system needs to store a new file, it may be unable to find a single
contiguous block of free space large enough to store the file and must instead store the
file in multiple smaller blocks. This can cause external fragmentation and performance
problems when accessing the file.
Data Fragmentation
In data fragmentation, data is stored in non-sequential form

Disk Defragmentation

Defragmentation is a process that locates and eliminates file fragments by rearranging them. All scattered fragments (data) are rearranged so that they come back into sequential form, using a utility program available in Windows. In this process, the program first checks the percentage of fragments present on the disk and then defragments as much of the disk as possible.

(Figure: a fragmented disk versus a defragmented disk.)

If you have a mechanical hard drive, the disk moves in a circular motion while the head reads and writes the data, so defragmentation is very beneficial and Windows does it automatically. If you have an SSD, however, defragmentation is not a good idea: defragmenting an SSD only wears the drive down, which is why Windows does not automatically defragment SSDs. Windows automatically applies other methods to SSDs for handling junk files, free space, etc. so that they work properly.
File allocation methods

Files are allocated disk space by the operating system. Operating systems deploy the following three main ways to allocate disk space to files:
o Contiguous Allocation
o Linked Allocation
o Indexed Allocation
Contiguous allocation method
A single continuous set of blocks is allocated to a file at the time of file
creation. Thus, this is a pre-allocation strategy, using variable size portions.
The file allocation table needs just a single entry for each file, showing the
starting block and the length of the file. This method is best from the point of
view of the individual sequential file. Multiple blocks can be read in at a time
to improve I/O performance for sequential processing. It is also easy to
retrieve a single block. For example, if a file starts at block b, then the blocks
assigned to the file will be: b, b+1, b+2,……b+n-1.

Contiguous allocation allocates disk space as a collection of adjacent/contiguous blocks. This technique needs to keep track of unused disk space.
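As a small illustration of this scheme (not from the original slides; file names and block numbers are made up), the following Python sketch keeps one (start block, length) entry per file and derives the blocks b, b+1, ..., b+n-1 exactly as described above.

# Minimal sketch of contiguous allocation.
# The file allocation table holds one entry per file: (starting block, length).
allocation_table = {
    "report.txt": (4, 3),    # starts at block 4, occupies 3 blocks
    "notes.txt": (10, 2),    # starts at block 10, occupies 2 blocks
}

def blocks_of(filename):
    """Return the list of disk blocks assigned to a file."""
    start, length = allocation_table[filename]
    return list(range(start, start + length))

print(blocks_of("report.txt"))   # [4, 5, 6]
print(blocks_of("notes.txt"))    # [10, 11]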
Features of contiguous allocation

● External fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient length. A compaction algorithm will be necessary to free up additional space on the disk.
● With pre-allocation, it is necessary to declare the size of the file at the time of creation.
● Simple, with easy access; however, the file size is often not known at the time of creation.
● Extending the file size is difficult.
Linked allocation
Allocation is on an individual block basis. Each
block contains a pointer to the next block in the
chain. Again the file table needs just a single entry
for each file, showing the starting block and the
length of the file. Although pre-allocation is
possible, it is more common simply to allocate
blocks as needed. Any free block can be added to
the chain. The blocks need not be continuous. An
increase in file size is always possible if a free disk
block is available. There is no external
fragmentation because only one block at a time is
needed but there can be internal fragmentation but
it exists only in the last disk block of the file.

Inside each block a link is maintained that points to where the next block of the file is.
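A minimal sketch of the same idea in Python (illustrative only, in the spirit of a FAT-style chain; block numbers are made up): the directory entry stores the first block, and each block stores a pointer to the next block until a nil pointer marks the end of the file.

# Linked (chained) allocation: next_block[b] is the block following b.
next_block = {7: 12, 12: 3, 3: None}    # chain: 7 -> 12 -> 3 -> end (nil pointer)
file_table = {"movie.mp4": 7}           # the file entry stores only the starting block

def read_chain(filename):
    """Follow the per-block pointers until the nil pointer is reached."""
    blocks = []
    b = file_table[filename]
    while b is not None:
        blocks.append(b)
        b = next_block[b]               # pointer kept inside each block
    return blocks

print(read_chain("movie.mp4"))          # [7, 12, 3]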
Features of linked allocation

● No external fragmentation. Files can grow easily. Many seeks are required to access file data. Example: the MS-DOS FAT file system.
● Internal fragmentation exists in the last disk block of the file.
● There is an overhead of maintaining the pointer in every disk block.
● If the pointer of any disk block is lost, the file will be truncated.
● It supports only sequential access of files; random access is not provided.
● Pointers require some space in the disk blocks.
● Need to traverse each block.
● A file ends at a nil pointer.
Indexed allocation

Indexed allocation addresses many of the problems of contiguous and chained allocation. In this case, the file allocation table contains a separate one-level index for each file: the index has one entry for each block allocated to the file. The allocation may be on the basis of fixed-size blocks or variable-sized blocks. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size blocks improves locality. This allocation technique supports both sequential and direct access to the file and thus is the most popular form of file allocation.

A table of pointers (the index) is created at the time of file creation. This table is modified as new blocks are allocated to the file or removed from the file. The index table itself is also saved in one or more blocks.
Example: the UNIX file system
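A minimal sketch of indexed allocation (illustrative only; file names and block numbers are made up): the index block holds one pointer per data block, so the n-th block of the file can be reached directly, without traversing earlier blocks.

# Indexed allocation: each file has an index block full of pointers.
index_blocks = {
    "data.db": [9, 2, 14, 5],    # the i-th entry points to the i-th data block
}

def block_for(filename, logical_block_no):
    """Direct access: jump straight to the requested block via the index."""
    return index_blocks[filename][logical_block_no]

print(block_for("data.db", 0))   # 9
print(block_for("data.db", 2))   # 14 (no need to read blocks 0 and 1 first)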
Features of indexed allocation
● Supports direct access
● A bad data block causes the loss of only that block.
● No external fragmentation

Disadvantages
● A bad index block could cause the loss of the entire file.
● The size of a file depends upon the number of pointers an index block can hold.
● Having an index block for a small file is wasteful. For very small files, say files that span only 2-3 blocks, indexed allocation would keep one entire block (the index block) just for the pointers, which is inefficient in terms of memory utilization. In linked allocation, by contrast, we lose the space of only 1 pointer per block.
● More pointer overhead
Maintenance of secondary storage
Disk formatting
Formatting is the process of preparing a data storage device for initial use which may
also create one or more new file systems.

Disk formatting is a process to configure data-storage devices such as hard drives, floppy disks and flash drives when we are going to use them for the very first time, i.e. initial usage. Disk formatting is usually required when a new operating system is going to be used by the user. It is also done when there is a space issue and we require additional space for the storage of more data in the drives. When we format the disk, the existing files within the disk are also erased. When a hard drive is prepared for initial use, the format can scan for viruses and repair bad sectors within the drive. Disk formatting also has the capability to erase bad applications and various sophisticated viruses. Since disk formatting deletes data and removes all the programs installed within the drive, it must be done with caution, and we must have a backup of all the data and applications we require. Disk formatting takes time, and frequent formatting of the disk decreases the life of the hard drive.
Partitioning

As the name suggests, partitioning means division. Partitioning is the process of dividing the hard disk into one or more regions. The regions are called partitions (e.g. C drive, E drive, F drive). It can be performed by the users and it will affect disk performance.

Low-level formatting

Low-level formatting is a type of physical formatting. It is the process of marking the cylinders and tracks of a blank hard disk; after this, the tracks are divided into sectors with sector markers. Nowadays low-level formatting is performed by the hard-disk manufacturers themselves. If we perform low-level formatting while data is present on the hard disk, all the data is erased and it is impossible to recover it. Some users perform such a format so that they can avoid privacy leakage. Otherwise, low-level formatting causes damage to the hard disk and shortens its service life; therefore, this formatting is not suggested for ordinary users.

High-level formatting

High-level formatting is the process of writing a file system, cluster size, partition label, and so on for a newly created partition or volume. It is done to erase the hard disk and install the operating system on the disk drive again.


!!Recovering data from a formatted disk!!

● As file deletion is done by the operating system, data on a disk is not fully erased during every high-level format. Instead, links to the files are deleted and the area on the disk containing the data is retained until it is overwritten.

● It's possible to recover data from a formatted hard drive after a quick format.
However, it's not possible to recover data after a full format. To maximize the chances
of recovering data, avoid writing new files to the formatted drive. You can also use a
trustworthy data recovery program

● Recovering data from a formatted disk can be challenging, but it's not always
impossible. When a disk is formatted, the file system structures are generally deleted,
but the actual data may still be present on the disk until new data overwrites it

● The success of data recovery depends on various factors, including the extent of
overwriting, the type of formatting, and the actions taken after formatting. It's
essential to act quickly and cautiously to maximize the chances of successful data
recovery.
Steps you can take to attempt data recovery:

Stop Using the Disk:
Immediately stop using the formatted disk to prevent further overwriting of data. Any new data written to the disk may overwrite the sectors where your old data is still present.

Use Data Recovery Software:
There are various data recovery software tools available that can help you recover data from a formatted disk. Examples include Recuva, EaseUS Data Recovery Wizard, Disk Drill, and TestDisk. These tools scan the disk for recoverable files and attempt to restore them.

Bootable Recovery Tools:
Some data recovery tools offer bootable versions that you can run from a USB drive or CD/DVD. This allows you to use the tool without accessing the formatted disk's operating system, minimizing the risk of overwriting data.

Professional Data Recovery Services:
If the data is critical and you are unable to recover it using software, you may need to consider professional data recovery services. These services have specialized equipment and expertise to recover data in more challenging situations. However, they can be expensive.

Restore from Backup:
If you have a backup of the data, this is the most straightforward solution. Regular backups are essential for preventing data loss in various scenarios, including accidental formatting.

Check for Hidden Partitions:
In some cases, formatting may create a new file system while leaving the old data intact in a hidden partition. Check if there are any hidden partitions on the disk that may contain your data.

Avoid Writing to the Disk:
Minimize any activity on the formatted disk. Do not install new software, save files, or perform any other actions that could write data to the disk.

Learn from the Experience:
Consider implementing a backup strategy to prevent future data loss. Regularly back up your important data to an external drive, cloud storage, or another secure location.
!!Compaction!!: The compaction process typically involves moving allocated memory blocks to fill gaps and create larger contiguous free
memory blocks.

Compaction is a process in which free space is collected into one large memory chunk to make space available for processes. In memory management, swapping creates multiple fragments in memory because of processes moving in and out. Compaction refers to combining all the empty spaces together into one block so that larger contiguous regions are available to processes.

Compaction in Memory Management:

In computer science, particularly in the context of memory management, compaction refers to the process of rearranging the memory contents to place all free memory together in one block. This helps prevent memory fragmentation, making it easier to allocate large contiguous blocks of memory.
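A toy sketch of this idea (purely illustrative; the memory layout is made up): allocated chunks are slid toward the start of memory so that all free cells end up as one contiguous block at the end.

# None represents a free memory cell; letters represent allocated chunks.
memory = ["A", None, None, "B", None, "C", None, None]

def compact(cells):
    used = [c for c in cells if c is not None]    # keep the allocated cells in order
    free = len(cells) - len(used)
    return used + [None] * free                   # all free space gathered in one block

print(compact(memory))   # ['A', 'B', 'C', None, None, None, None, None]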

File System Compaction:

In file systems, compaction can be a process of rearranging or optimizing the allocation of files on a storage device. This can involve defragmentation, where fragmented files are reorganized to improve access times and overall performance.
How an operating system manages processes
in computers
What is a process?

A process is a program in execution.

Ex: when we write a program in C or C++ and compile it, the compiler creates binary code. The original code and binary code are both programs. When we actually run the binary code, it becomes a process.

A process is not a program; a program may have many processes.

Process management can help organizations improve their operational efficiency, reduce costs, increase
customer satisfaction, and maintain compliance with regulatory requirements. It involves analyzing the
performance of existing processes, identifying bottlenecks, and making changes to optimize the process
flow.
Process management includes various tools and techniques such as process mapping, process analysis,
process improvement, process automation, and process control. By applying these tools and techniques,
organizations can streamline their processes, eliminate waste, and improve productivity.
Types of processes

I/O-bound processes: An I/O-bound process requires more I/O time and less CPU time. An I/O-bound process spends more time in the waiting state.

Processor-bound (CPU-bound) processes: A CPU-bound process requires more CPU time, i.e. spends more time in the running state.

Process scheduling is an integral part of process management in an operating system. It refers to the mechanism used by the operating system to determine which process to run next. The goal of process scheduling is to improve overall system performance by maximizing CPU utilization, minimizing execution time, and improving system response time.
Process creation

Reasons for process creation:
- A new batch job
- A user starts a program
- The OS creates a process to provide a service
- A running program starts another process

Process termination

Reasons for process termination:
- Normal termination
- The execution time limit is exceeded
- A requested resource is unavailable
- An execution error
- A memory access violation
- An OS or parent process has terminated
- The parent process has terminated

Interrupts

An interrupt is an event that alters the sequence of execution of a process.
- Interrupts can occur due to a timer expiry, an OS service request, or I/O completion. For example, when a disk driver has finished transferring the requested data, it generates an interrupt to the OS to inform the OS that the task is over.
- Interrupts occur asynchronously to the ongoing activity of the processor. Thus the times at which interrupts occur are unpredictable.

Interrupt Handling

Generally, I/O devices are slower than the CPU. After each I/O call the CPU would have to sit idle until the I/O device completes the operation, so the processor instead saves the status of the current process and executes some other process. When the I/O operation is over, the I/O device issues an interrupt to the CPU, which then restores the original process and resumes its execution.
Process Management

If the operating system supports multiple users, then the services in this category are very important. In this regard, operating systems have to keep track of all the created processes, schedule them, and dispatch them one after another. But each user should feel that they have full control of the CPU.

Some of the system calls in this category are as follows.

1. Create a child process identical to the parent
2. Terminate a process
3. Wait for a child process to terminate
4. Change the priority of a process
5. Block a process
6. Ready a process
7. Dispatch a process
8. Suspend a process
9. Resume a process
10. Delay a process
11. Fork a process

The Operating System does the following activities for processor management:
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process is no longer required.
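As a small illustration of the "fork a process" call from the list above, here is a hedged sketch using Python's os.fork, which is available only on POSIX systems (Linux, macOS); it is not meant to show how any particular OS implements process creation internally.

import os

pid = os.fork()              # create a child process identical to the parent
if pid == 0:
    print("child:  PID =", os.getpid())
    os._exit(0)              # terminate the child
else:
    os.waitpid(pid, 0)       # parent waits for the child to terminate
    print("parent: PID =", os.getpid(), "reaped child", pid)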
Attributes or Characteristics of a Process

The attributes of a process are also known as the context of the process. Every process has its own process control block (PCB), i.e. each process will have a unique PCB. All of these attributes are part of the PCB. A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID).

1. Process ID: A unique identifier assigned by the operating system
2. Process State: Can be ready, running, etc.
3. CPU registers: Like the Program Counter (CPU registers must be saved and restored when a process is swapped in and out of the CPU)
4. Accounting information: Amount of CPU used for process execution, time limits, execution ID, etc.
5. I/O status information: For example, devices allocated to the process, open files, etc.
6. CPU scheduling information: For example, priority (different processes may have different priorities; for example, a shorter process may be assigned high priority in shortest-job-first scheduling)
What is a PCB?
A process control block (PCB) is a data structure used by operating systems to store important information about running processes. It contains information such as the unique identifier of the process (Process ID or PID), current status, program counter, CPU registers, memory allocation, open file descriptors and accounting information. The PCB is critical to context switching because it allows the operating system to efficiently manage and control multiple processes.

Context switching is the process of saving the current state of a running process and loading the state of another process so that the CPU can switch its execution from one process to another. The process control block (PCB) plays a key role in context switching because it contains all relevant information about the process. When the operating system decides to switch to another process, it stores the state of the current process in its PCB, including CPU registers and the program counter. It then loads the PCB of the next process, restores its state and resumes execution from where it left off. This seamless switching between processes allows the operating system to create the illusion of simultaneous execution, even though the processor can only run one process at a time.
Additional Points to Consider for Process Control Block (PCB)

● Interrupt handling: The PCB also contains information about the interrupts that a process may
have generated and how they were handled by the operating system.
● Context switching: The process of switching from one process to another is called context
switching. The PCB plays a crucial role in context switching by saving the state of the current
process and restoring the state of the next process.
● Real-time systems: Real-time operating systems may require additional information in the PCB,
such as deadlines and priorities, to ensure that time-critical processes are executed in a timely
manner.
● Virtual memory management: The PCB may contain information about a process’s virtual
memory management, such as page tables and page fault handling.
● Inter-process communication: The PCB can be used to facilitate inter-process communication by
storing information about shared resources and communication channels between processes.
● Fault tolerance: Some operating systems may use multiple copies of the PCB to provide fault
tolerance in case of hardware failures or software errors.
Context Switching:
The process of saving the context of one process and loading the context of another
process is known as Context Switching. In simple terms, it is like loading and unloading
the process from the running state to the ready state.

When Does Context Switching Happen?

1. When a high-priority process comes to a ready state (i.e. with higher priority than the
running process)
2. An Interrupt occurs
3. User and kernel-mode switch (It is not necessary though)
4. Preemptive CPU scheduling is used.
A context switch is the mechanism to store and restore the state or context
of a CPU in Process Control block so that a process execution can be
resumed from the same point at a later time.

- Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
- When the scheduler switches the CPU from executing one process to
execute another, the context switcher saves the content of all processor
registers for the process being removed from the CPU, in its process
control block.
- Context switch time is pure overhead. Context switching can significantly affect performance, as modern computers have a large number of general-purpose and status registers to be saved.
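The sketch below is a toy model of these ideas (the PCB fields and register names are simplified assumptions, not a real kernel structure): the outgoing process's registers are saved into its PCB, and the incoming process's registers are restored from its PCB.

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "ready"                  # new / ready / running / waiting / terminated
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, nxt: PCB, cpu: dict):
    current.registers = dict(cpu)         # save CPU state into the outgoing PCB
    current.state = "ready"
    cpu.clear()
    cpu.update(nxt.registers)             # restore CPU state from the incoming PCB
    nxt.state = "running"

cpu = {"pc": 120, "acc": 7}
p1 = PCB(pid=1, state="running", registers={"pc": 120, "acc": 7})
p2 = PCB(pid=2, registers={"pc": 40, "acc": 0})
context_switch(p1, p2, cpu)
print(p1.state, p2.state, cpu)            # ready running {'pc': 40, 'acc': 0}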
States of Process

A process is in one of the following states:


1. New: Newly Created Process (or) being-created process.
2. Ready: After the creation process moves to the Ready state, i.e. the process is ready for execution.
3. Run: Currently running process in CPU (only one process at a time can be under execution in a
single processor)
4. Wait (or Block): When a process requests I/O access.
5. Complete (or Terminated): The process completed its execution.
6. Suspended Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state
7. Suspended Block: When the waiting queue becomes full.
7 state process model
Scheduling Algorithms

The operating system can use different scheduling algorithms to schedule processes.

Preemptive and non-preemptive scheduling are two types of CPU scheduling techniques (scheduling policies).

In preemptive scheduling, the CPU is allocated to a process for a limited time. A process can be interrupted while it's running, and the CPU can be given to another process. Tasks are switched based on priority.
Ex: SRTF, RR, priority scheduling

In non-preemptive scheduling, the CPU is allocated to a process until it terminates or blocks. A process only relinquishes control of the CPU when it finishes its current CPU burst. No preemptive switching takes place.
Ex: SJF, FCFS
1. First-come, first-served (FCFS):

This is the simplest scheduling algorithm: processes are executed on a first-come, first-served basis. FCFS is non-preemptive, which means that once a process starts executing, it continues until it is finished or waiting for I/O (a worked FCFS example follows this list).
2. Shortest Job First (SJF):

SJF is a non-preemptive scheduling algorithm (its preemptive variant is SRTF) that selects the process with the shortest burst time. The burst time is the time a process takes to complete its execution. SJF minimizes the average waiting time of processes.
3. Round Robin (RR):

RR is a preemptive scheduling algorithm that gives each process a fixed amount of time (a time quantum) in each round. If a process does not complete its execution within the specified time, it is preempted and added to the end of the queue. RR ensures fair distribution of CPU time to all processes and avoids starvation.
4. Priority Scheduling:

This scheduling algorithm assigns priority to each process and the process with the highest
priority is executed first. Priority can be set based on process type, importance, or resource
requirements.
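Here is a worked FCFS example (burst times are made-up values, not from the slides): processes run to completion in arrival order, and the waiting time of each process is simply the sum of the bursts that ran before it.

# First-come, first-served: non-preemptive, in arrival order.
processes = [("P1", 5), ("P2", 3), ("P3", 8)]   # (name, CPU burst time)

clock = 0
for name, burst in processes:
    waiting = clock                  # time spent in the ready queue
    clock += burst                   # the process runs to completion
    print(f"{name}: waiting={waiting}, turnaround={clock}")
# Average waiting time = (0 + 5 + 8) / 3 ≈ 4.33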
Process schedulers
In multiprogramming environment, the OS decides which process gets the processor when and for how much time. This
function is called process scheduling

Long-term scheduling (Job scheduling):

It determines which programs are admitted to the system for processing. Job
scheduler selects processes from the queue and loads them into memory
for execution. Process loads into the memory for CPU scheduling.

It brings the new process to the 'Ready' state. It controls the degree of multiprogramming, i.e. the number of processes present in a ready state at any point in time. It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes are those that spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. Long-term schedulers operate at a high level and are typically used in batch-processing systems.
Medium-term scheduling:

Medium-term scheduling is in charge of swapping processes between main memory and secondary storage.

It is responsible for suspending and resuming processes. It mainly does swapping (moving processes from main memory to disk and vice versa). Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up. It is helpful in maintaining a balance between the I/O-bound and the CPU-bound processes. It reduces the degree of multiprogramming.

Short-term scheduling (low-level scheduling):

Determines which ready process will be assigned the CPU when it next becomes available.

It is responsible for selecting one process from the ready state and scheduling it onto the running state. Note: the short-term scheduler only selects the process to schedule; it doesn't load the process onto the CPU. This is where the scheduling algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation due to high-burst-time processes. The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (ready to running state); context switching is done by the dispatcher only. A dispatcher does the following:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
How an operating system manages the
resources
Logical and Physical Address Space

● Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It
is also known as a Virtual address. Logical address space can be defined as the size of the
process. A logical address can be changed.
● Physical Address Space: An address seen by the memory unit (i.e the one loaded into the
memory address register of the memory) is commonly known as a “Physical Address”. A
Physical address is also known as a Real address. The set of all physical addresses
corresponding to these logical addresses is known as Physical address space. A physical
address is computed by MMU. The run-time mapping from virtual to physical addresses is
done by a hardware device Memory Management Unit(MMU). The physical address always
remains constant.
● The mapping between logical pages and physical page frames is maintained by the page table,
which is used by the memory management unit to translate logical addresses into physical
addresses. The page table maps each logical page number to a physical page frame number.
Memory management
Memory management is the functionality of an operating system which handles or manages primary
memory and moves processes back and forth between main memory and disk during execution.

Why is Memory Management Required?
● To allocate and de-allocate memory before and after process execution.
● To keep track of the memory space used by processes.
● To minimize fragmentation issues.
● To ensure proper utilization of main memory.
● To maintain data integrity while executing a process.

The Operating System does the following activities for memory management:
● Keeps track of primary memory, i.e. which parts of it are in use and by whom, and which parts are not in use.
● In multiprogramming, the OS decides which process will get memory, when, and how much.
● Allocates memory when a process requests it.
● De-allocates the memory when a process no longer needs it or has been terminated.
MMU - memory management unit
(MMU) is a computer hardware component that handles memory and caching operations. It's
responsible for translating virtual memory addresses to physical addresses in the memory system.
This allows multiple processes to run simultaneously without interfering with each other.

In MMU scheme, the value in the relocation register is added to every address generated
by a user process at the time it is sent to memory. The user program deals with logical
addresses; it never sees the real physical addresses

- The value in the base register is added to every address generated by a user process, which is treated as an offset, at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
- The user program deals with virtual addresses; it never sees the real physical addresses.
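The base-register example above can be written out as a tiny sketch (the base value 10000 comes from the text; everything else is illustrative).

BASE_REGISTER = 10000            # relocation value loaded by the OS for this process

def to_physical(logical_address):
    # Every CPU-generated (logical) address is treated as an offset and the
    # base register value is added to it on its way to memory.
    return BASE_REGISTER + logical_address

print(to_physical(100))          # 10100, matching the example above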
Paging
● Paging is a memory management scheme that eliminates the need for a contiguous allocation of
physical memory. The process of retrieving processes in the form of pages from the secondary
storage into the main memory is known as paging.
● The basic purpose of paging is to separate each procedure into pages. Additionally, frames will be
used to split the main memory. This scheme permits the physical address space of a process to be
non – contiguous.

(Figure: virtual memory pages mapped through a page table to physical memory frames.)

In paging, the physical memory is divided into fixed-size blocks called page frames, which are the same size as the pages used by the process. The process's logical address space is also divided into fixed-size blocks called pages, which are the same size as the page frames. When a process requests memory, the operating system allocates one or more page frames to the process and maps the process's logical pages to the physical page frames.
Remember.
● Physical Address Space = Size of main memory
● size of main memory = 2^number of bits in physical address
● Size of main memory = Total number of frames x Page size
● Page size = 2^number of bits in page offset
● number of frames in main memory = 2^number of bits in frame number
● Frame size = Page size
● If the given address consists of n bits, then using n bits, 2^n locations are possible
● Size of memory = 2^n x size of one location
● If the memory is byte-addressable, then size of one location = 1 byte. Thus, size of memory = 2^n bytes
● If the memory is word-addressable where 1 word = m bytes, then size of one location = m bytes. Thus, size of memory = 2^n x m bytes
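A worked translation using the relations above (the page size and the page table contents are assumed example values): with a 1 KB page (2^10 bytes), the low 10 bits of a logical address are the page offset and the remaining bits are the page number.

PAGE_SIZE = 1024                        # bytes per page (= frame size)
page_table = {0: 5, 1: 2, 2: 7}         # page number -> frame number (example mapping)

def translate(logical_address):
    page_no = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_no = page_table[page_no]      # lookup performed by the MMU via the page table
    return frame_no * PAGE_SIZE + offset

print(translate(1034))   # page 1, offset 10 -> frame 2 -> physical address 2058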
Mapping
- The operating system takes care of mapping the logical addresses to physical addresses at the time
of memory allocation to the program.
- The runtime mapping from virtual to physical address is done by the memory management
unit (MMU) which is a hardware device.
- A Page Table is a data structure used by the operating system to keep track of the mapping between virtual
addresses used by a process and the corresponding physical addresses in the system’s memory.
Virtual memory
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of the main memory.

Virtual memory is partitioned into equal-size pages. Main memory is also partitioned into equal-size page frames.
Size of a page = size of a page frame
Programs are also partitioned into pages at the time of loading.

(Figure: virtual memory pages and main memory page frames, each 16KB in size.)

Purpose of Virtual memory

● It allows us to extend the use of physical memory by using the disk.
● It allows us to have memory protection, because each virtual address is translated to a physical address.
● Allow applications larger than physical memory to execute.
● Run partially loaded programs
● Entire program need not to be in memory all the time.
● Degree of Multiprogramming: Many programs simultaneously reside in
memory.
● Application Portability:
- Applications should not have to manage memory resources
- Program should not depend on memory architecture.
● Permit sharing of memory segments or regions.
For example, read-only code segments should be shared between program
instances.
Input output device management

Device drivers
● A driver provides a software interface to hardware devices, enabling operating systems
and other computer programs to access hardware functions without knowing the precise
hardware details.
● Device drivers depend on both the hardware and the operating system loaded into the computer.

Features of Device Management in Operating System


● The operating system is responsible for managing device communication through the respective drivers.
● The operating system keeps track of all devices by using a program known as an input/output controller.
● It decides which process gets a device, when, and for how long.
● The OS is responsible for fulfilling the requests of processes to access devices.
● It connects devices to various programs in an efficient way, without errors.
● It de-allocates devices when they are not in use.
Spooling
● Spooling is an acronym for Simultaneous Peripheral Operations On Line.
● Spooling refers to putting data of various I/O jobs in a buffer.
● This buffer is a special area in memory or hard disk which is accessible to
I/O devices.

Data is sent to and stored in main memory or other volatile storage until it is requested for execution by a program or computer. Spooling makes use of the disk as a large buffer to send data to printers and other devices. It can also be used for input, but it is more commonly used for output. Its primary function is to prevent two users from printing on the same page at the same time, which would result in their output being completely mixed together. It prevents this because it uses the FIFO (First In First Out) strategy to retrieve the stored jobs from the spool, and that creates a synchronization preventing the output from being mixed together.

Spooling also aids in the reduction of idle time, as well as overlapping I/O and CPU work. Simple forms of file management are frequently provided by batch systems; access to the file is sequential, and batch systems do not necessitate the management of time-critical devices.
Advantages
● The spooling operation makes use of a disk as a very large buffer.
● It enables applications to run at the CPU's speed while I/O devices operate at their full speed.
● Spooling is capable of overlapping the I/O operations of one job with the processor operations of another job.

Disadvantages
● Depending on the volume of requests received and the number of input devices connected, spooling needs a lot of storage.
● Since the spool is created in secondary storage, having lots of input devices active at once may cause the secondary storage to fill up quickly and increase disk traffic. As a result, the disk becomes slower and slower as the volume of traffic grows.
An operating system does the following activities related to distributed
environment:

● Handles I/O device data spooling as devices have different data access
rates.
● Maintains the spooling buffer which provides a waiting station where
data can rest while the slower device catches up.
● Maintains parallel computation through the spooling process, as a computer can perform I/O in a parallel fashion. It becomes possible to have the computer read data from a tape, write data to disk and write out to a printer while it is doing its computing task.
Buffering

● Buffering is a process in which data is stored in a buffer or cache, which makes the stored data more quickly accessible than it is from the original source (a small sketch follows the list below).
● A buffer is an area in memory that is used to hold data being transmitted from one place to another and to store the data temporarily.
Function of Buffering in OS
● Synchronization: This process increases the synchronization of different devices that are connected, so the
system’s performance also improves.
● Smoothing: The input and output devices have different operating speeds and buffer data block sizes; buffering encapsulates these differences and ensures smooth operation.
● Efficient usage: This technique reduces system overhead and the inefficient usage of system resources.
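The sketch below illustrates the buffering idea in miniature (a made-up bounded queue, not any specific OS mechanism): data rests in a fixed-size buffer so that a fast producer and a slower consumer can each proceed at their own speed.

from collections import deque

buffer = deque(maxlen=4)        # fixed-size buffer area in memory

def produce(item):
    if len(buffer) < buffer.maxlen:
        buffer.append(item)     # data waits here until the slower side catches up
        return True
    return False                # buffer full: the producer must wait

def consume():
    return buffer.popleft() if buffer else None

for block in ["b0", "b1", "b2"]:
    produce(block)
print(consume(), list(buffer))  # b0 ['b1', 'b2']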
Swapping
