OperatingSystemII Note
LECTURE NOTES
ON
OPERATING SYSTEM II
(COM321)
BY:
MR ABUBAKAR BASHIR USMAN
INTRODUCTION
A computer system's operation can be divided roughly into four components: the hardware,
the operating system, the application programs and the users.
An operating system is software which acts as an interface between the end user and the
computer hardware. Every computer must have at least one OS to run other programs.
Applications like browsers, word processors, spreadsheets, CorelDraw and games need an
environment in which to run and perform their tasks. The OS helps you communicate with
the computer without knowing how to speak the computer's language. It is not possible for
a user to use any computer or mobile device without an operating system.
What is a Kernel?
The kernel is the central component of a computer operating system. Its main job is to
manage communication between the software and the hardware. The kernel sits at the
nucleus of the operating system, making communication between hardware and software
possible.
4. Device Management: Device management keeps track of all devices. The module
responsible for this task is known as the I/O controller. It also performs allocation and
de-allocation of devices.
7. Security: The security module protects the data and information of a computer system
against malware threats and unauthorized access.
10. Job accounting: Keeping track of the time and resources used by various jobs and users.
End of Introduction
MEMORY MANAGEMENT
The term Memory can be defined as a part of computer in which data or program
instructions can be stored for retrieval. The memory comprises a large array or group of
words or bytes, each with its own location. The primary motive of a computer system is to
execute programs. These programs, along with the information they access, should be in the
main memory during execution. The CPU fetches instructions from memory according to
the value of the program counter. To achieve a degree of multiprogramming and proper
utilization of memory, memory management is important. Many memory management
methods exist, reflecting various approaches, and the effectiveness of each algorithm
depends on the situation.
Memory is the most essential element of a computing system; without it a computer cannot
perform even simple tasks. Computer memory is of two types:
1. Primary Memory (RAM & ROM)
2. Secondary Memory (hard drive, CD, etc.)
Random access memory (RAM) is a primary, volatile memory, and read-only memory
(ROM) is a primary, non-volatile memory.
S/N  RAM                            ROM
1    Temporary storage              Permanent storage
2    Stores data in GBs             Stores data in MBs
3    Volatile                       Non-volatile
4    Used in normal operations      Used for the start-up process of the computer
5    Writing data is faster         Writing data is slower
Static loading: - loading the entire program into memory at a fixed address before
execution begins. It requires more memory space.
Example: In the case of the music player application, if it is statically loaded, all the
necessary components, such as the core music player functionality and any associated
libraries, would be loaded into memory when you double-click the program icon. This
includes everything the music player needs to operate, such as the user interface
elements, audio playback algorithms, and other relevant resources.
Dynamic loading: - Normally, the entire program and all data of a process must be in
physical memory for the process to execute, so the size of a process is limited to the size
of physical memory. To gain better memory utilization, dynamic loading is used: a routine
is not loaded until it is called. All routines reside on disk in a relocatable load format.
One advantage of dynamic loading is that an unused routine is never loaded. Dynamic
loading is especially useful when large amounts of code are needed only to handle
infrequently occurring cases.
Example: In the context of the music player application, if it utilizes dynamic loading,
the loader would only load the essential components required to display the user
interface and handle basic operations when you double-click the program icon. For
example, it might load the main interface and basic playback functionality. As you
interact with the application and initiate specific actions, such as accessing a playlist or
applying audio effects, the loader would dynamically load the necessary components
into memory to fulfill those requests.
Static loading vs. dynamic loading: Static loading is used when you want to load your
program statically; at the time of compilation, the entire program is linked and compiled
without need of any external module or program dependency. In a dynamically loaded
program, references are provided at compile time and the actual loading is done at the
time of execution.
Static linking vs. dynamic linking: Static linking combines all the modules required by a
program into a single executable, which helps the OS prevent any runtime dependency.
When dynamic linking is used, the actual module or library is not linked into the program;
instead, a reference to the dynamic module is provided at the time of compilation and
linking.
Advantages of Swapping
1. Allows more efficient use of the available memory
2. Data is swapped to the hard disk drive if the computer runs out of RAM;
otherwise, the computer would crash
3. Simple to implement
Disadvantages of Swapping
1. It leads to wastage of memory, which is called fragmentation
2. Swapping processes between the hard disk drive and main memory (RAM) wastes
CPU time
Fragmentation
Fragmentation occurs when processes are loaded into memory and removed after execution,
leaving behind small free holes. These holes cannot be assigned to new processes because
they are not combined, or individually do not fulfil the memory requirement of a process.
To achieve a good degree of multiprogramming, we must reduce this waste of memory. The
inability to use available memory space is known as fragmentation.
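External fragmentation can be demonstrated with a small sketch (block sizes here are illustrative, not from the notes): total free memory is sufficient, yet no single hole is large enough for the request.

```python
# Sketch of external fragmentation with a simple first-fit search.
def first_fit(blocks, request_kb):
    """Return the index of the first free block that fits, else None."""
    for i, (size, free) in enumerate(blocks):
        if free and size >= request_kb:
            return i
    return None

# Holes left behind after several processes were removed from memory.
blocks = [(50, True), (100, False), (50, True), (100, False), (50, True)]
total_free = sum(size for size, free in blocks if free)
slot = first_fit(blocks, 120)
print(total_free, slot)  # 150 KB free in total, yet a 120 KB request fails
```

The three 50 KB holes add up to 150 KB, but because they are not contiguous the 120 KB request cannot be satisfied.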
Advantages of Segmentation
1. No internal fragmentation
2. Segment Table consumes less space in comparison to page table in paging
Disadvantages of Segmentation
1. As processes are loaded and removed from the memory, the free memory space is
broken into little pieces, causing external fragmentation
4. Fixed Partitioning
Fixed partitioning is the earliest and one of the simplest techniques for loading more
than one process into main memory. The main memory is divided into partitions (blocks) of
equal or different sizes, either before any process executes or during system
configuration. The operating system always resides in the first partition, while the
other partitions can be used to store user processes. Memory is assigned to processes in
a contiguous way.
Advantage of fixed partitioning
1. Simple to implement
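The scheme above can be sketched as follows (a minimal illustration; the partition sizes and the helper load_process are assumptions for the example):

```python
# Sketch of fixed partitioning: partitions are created once, and an
# arriving process goes into the first free partition large enough for it.
partitions = [           # [size_kb, occupant]
    [100, "OS"],         # the OS always resides in the first partition
    [200, None],
    [300, None],
    [400, None],
]

def load_process(name, size_kb):
    """Place a process; return the internal fragmentation in KB, or None."""
    for part in partitions:
        if part[1] is None and part[0] >= size_kb:
            part[1] = name
            return part[0] - size_kb   # the unused remainder is wasted
    return None                        # no free partition is big enough

waste = load_process("P1", 150)    # fits the 200 KB partition, wastes 50 KB
too_big = load_process("P2", 500)  # larger than every partition -> None
print(waste, too_big)
```

The wasted remainder inside an allocated partition is internal fragmentation, and a process bigger than every partition simply cannot be loaded.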
VIRTUAL MEMORY
Virtual Memory is a storage allocation scheme in which secondary memory can be addressed
as though it were part of main memory. The addresses a program may use to reference
memory are distinguished from the addresses the memory system uses to identify physical
storage sites, and program generated addresses are translated automatically to the
corresponding machine addresses.
The size of virtual storage is limited by the addressing scheme of the computer system
and by the amount of secondary memory available, not by the actual number of main
storage locations.
It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in computer
memory.
1. All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time. This means that a process can be
swapped in and out of main memory such that it occupies different places in main
memory at different times during the course of execution.
2. A process may be broken into a number of pieces, and these pieces need not be
contiguously located in main memory during execution. The combination of dynamic
run-time address translation and the use of a page or segment table permits this.
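The run-time translation in point 1 can be sketched with a toy page table (a minimal illustration; the page size and table entries are hypothetical values chosen for the example):

```python
# Sketch of run-time address translation: a page table maps virtual page
# numbers to physical frame numbers, so a process's pieces need not sit
# contiguously in main memory.
PAGE_SIZE = 4096                 # 4 KB pages, a common choice

page_table = {0: 5, 1: 2, 2: 7}  # virtual page -> physical frame

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table[page]     # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

physical = translate(4100)       # page 1, offset 4 -> frame 2, offset 4
print(physical)  # 8196
```

Because only the page number is remapped and the offset is kept, consecutive virtual pages may land in scattered physical frames.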
How Virtual Memory Is Implemented
Virtual Memory is a memory management technique that is implemented using both
hardware (MMU) and software (Operating System). The goal of virtual memory is to map
virtual memory addresses generated by an executing program into physical addresses in
computer memory. This concerns two main aspects: address translation (from virtual to
physical) and the management of virtual address spaces. The virtual-to-physical
translation is performed on the CPU chip by the Memory Management Unit (MMU). Virtual
memory is commonly implemented through:
1. Swapping
2. Paging
Demand paging
Demand paging is a type of swapping done in virtual memory systems. In demand paging,
data is not copied from the disk to RAM until it is needed or demanded by some program;
data that is already in memory is not copied again. This is also called lazy evaluation,
because only the demanded pages of memory are swapped from secondary storage (disk) to
main memory. In contrast, during pure swapping, all the memory for a process is swapped
from secondary storage to main memory at process start-up. The process of loading a page
into main memory (RAM) on demand is known as demand paging.
A demand paging system is quite similar to a paging system with swapping, where processes
reside in secondary memory and pages are loaded only on demand, not in advance. When a
context switch occurs, the operating system does not copy any of the old program's pages
out to disk or any of the new program's pages into main memory. Instead, it begins
executing the new program after loading its first page and fetches that program's pages
as they are referenced. While executing a program, if the program references a page which
is not available in main memory because it was swapped out a little while earlier, the
processor treats this invalid memory reference as a page fault and transfers control from
the program to the operating system, which brings the page back into memory.
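The load-on-first-reference behaviour can be sketched as follows (a minimal illustration; the reference string is an arbitrary example):

```python
# Sketch of demand paging: a page is brought into memory only on its first
# reference; that first reference is counted as a page fault.
memory = set()       # pages currently resident in RAM
page_faults = 0

def reference(page):
    global page_faults
    if page not in memory:       # invalid reference -> page fault
        page_faults += 1         # the OS loads the page from disk
        memory.add(page)

for page in [1, 2, 1, 3, 2, 1]: # a program's page reference string
    reference(page)

print(page_faults)  # 3: each distinct page faulted exactly once
```

Only three of the six references cause disk traffic; the rest hit pages already resident, which is the whole benefit of the lazy approach.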
Page Replacement
Page replacement is a technique by which an operating system decides which memory pages
to swap out (write to disk) when a page of memory needs to be allocated. Paging happens
whenever a page fault occurs and a free page cannot be used for the allocation, either
because no pages are available or because the number of free pages is lower than required.
When the page that was selected for replacement and paged out is referenced again, it has
to be read in from disk, and this requires waiting for I/O completion. This process
determines the quality of the page replacement algorithm: the less time spent waiting for
page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about page accesses
provided by the hardware and tries to select which pages should be replaced so as to
minimize the total number of page misses, while balancing this against the costs in
primary storage and processor time of the algorithm itself. There are many different page
replacement algorithms. We evaluate an algorithm by running it on a particular string of
memory references and computing the number of page faults.
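One simple replacement policy is FIFO (evict the page that has been resident longest). The evaluation method just described, running an algorithm on a reference string and counting page faults, can be sketched like this (the reference string and frame count are example values):

```python
# Sketch of FIFO page replacement evaluated on a reference string.
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()             # oldest resident page on the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft() # evict the page loaded longest ago
            frames.append(page)
    return faults

faults = fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3)
print(faults)  # 9
```

Running other algorithms (LRU, Optimal, etc.) on the same string and comparing fault counts is exactly how their quality is compared.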
2. Update
File updating: changing values in one or more records of a file, especially a data file.
3. Delete
Delete is computer terminology for remove or erase. You can delete text from a document
or delete entire files or folders from your hard drive. When typing a document, you can
remove characters behind the cursor by pressing the Backspace key. To remove characters
in front of the cursor, press the smaller Delete key near the Home and End keys on the
keyboard. You can also remove entire sections of text by selecting the text you wish to
delete and pressing either key. Files and folders can be removed from your hard drive by
dragging them to the Recycle Bin (Windows) or the Trash (Macintosh) and then emptying it.
When you delete a file, it is not actually erased; instead, the reference to the file is
removed. This means deleted files remain intact until they are overwritten. Special
utilities such as Norton UnErase can recover accidentally deleted files.
4. Read
The ability to fetch a file from secondary storage in order to view it.
File Directory
A collection of files in one place is a file directory. The directory contains
information about the files, including their attributes, location and ownership. Much of
this information, especially that concerned with storage, is managed by the operating
system. The directory is itself a file, accessible by various file management routines.
Information contained in a device directory for each file includes:
1. Name
2. Type
3. Address
4. Current length
5. Maximum length
6. Date last accessed
7. Date last updated
8. Protection information
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate
queue for each of the process states and PCBs of all processes in the same execution state are
placed in the same queue. When the state of a process is changed, its PCB is unlinked from
its current queue and moved to its new state queue. The Operating System maintains the
following important process scheduling queues −
1. Job queue − this queue keeps all the processes in the system.
2. Ready queue − this queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
3. Waiting/Device queue − when a process needs some I/O operation in order to
complete its execution, the OS changes the state of the process from running to
waiting and places its PCB in this queue.
1. Preemptive Scheduling
2. Non-Preemptive Scheduling
1. Preemptive Scheduling
In preemptive scheduling, tasks are mostly assigned priorities. Sometimes it is important
to run a task with a higher priority before a lower priority task, even if the lower
priority task is still running. The lower priority task is put on hold for some time and
resumes when the higher priority task finishes its execution. In preemptive scheduling, a
process can be interrupted even before completion.
2. Non-Preemptive Scheduling
In this type of scheduling method, the CPU is allocated to a specific process. The
process that keeps the CPU busy releases the CPU either by switching context or by
terminating. It is the only method usable on a wide range of hardware platforms, because
it does not need special hardware (for example, a timer) the way preemptive scheduling
does. In non-preemptive scheduling, a process is not interrupted until its life cycle is
complete.
Schedulers
Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which
process to run. There are three types of schedulers: long-term, short-term and
medium-term schedulers.
The OS can use different policies to manage each queue (FIFO, Round Robin, SJN, etc.).
The OS scheduler determines how to move processes between the ready and run queues; the
run queue can have only one entry per processor core on the system.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
1. First Come First Serve (FCFS): Simplest scheduling algorithm that schedules
according to arrival times of processes. First come first serve scheduling algorithm
states that the process that requests the CPU first is allocated the CPU first. It is
implemented by using the FIFO queue. When a process enters the ready queue, its
PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the
process at the head of the queue. The running process is then removed from the
queue. FCFS is a non-preemptive scheduling algorithm.
Gantt chart:
| P1 | P2 | P5 | P3 | P4 |
0    5    11   20    27    35
Process  Arrival  Burst  Completion time  Turnaround time (CT-AT)  Waiting time (TAT-BT)
P1       0        5      5                5                        0
P2       1        6      11               10                       4
P5       1        9      20               19                       10
P3       2        7      27               25                       18
P4       3        8      35               32                       24
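The FCFS calculation can be sketched in Python (arrival and burst values are reconstructed from the Gantt chart above: with the completion times given, each arrival is CT − TAT and each burst is TAT − WT):

```python
# Sketch of FCFS: run processes in arrival order, each to completion, and
# compute completion (CT), turnaround (TAT) and waiting (WT) times.
processes = [            # (name, arrival, burst), in arrival order
    ("P1", 0, 5), ("P2", 1, 6), ("P5", 1, 9), ("P3", 2, 7), ("P4", 3, 8),
]

rows, time = [], 0
for name, arrival, burst in processes:
    time = max(time, arrival) + burst   # non-preemptive: run to completion
    turnaround = time - arrival         # TAT = CT - AT
    waiting = turnaround - burst        # WT = TAT - BT
    rows.append((name, time, turnaround, waiting))

for row in rows:
    print(row)
```

The printed rows reproduce the completion, turnaround and waiting times in the table above.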
2. Shortest Job First (SJF): The process with the shortest burst time is scheduled first.
If two processes have the same burst time, FCFS is used to break the tie. SJF can be
implemented as either a preemptive or a non-preemptive scheduling algorithm; the Gantt
chart below shows the non-preemptive variant.
Gantt chart:
| P1 | P2 | P3 | P4 | P5 |
0    5    11   18    26    35
Process  Arrival  Burst  Completion time  Turnaround time (CT-AT)  Waiting time (TAT-BT)
P1       0        5      5                5                        0
P2       1        6      11               10                       4
P3       2        7      18               16                       9
P4       3        8      26               23                       15
P5       1        9      35               34                       25
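Non-preemptive SJF can be sketched the same way (arrival and burst values reconstructed from the table above: arrival = CT − TAT, burst = TAT − WT):

```python
# Sketch of non-preemptive SJF: at every scheduling point, run the ready
# process with the shortest burst time until it completes.
def sjf(processes):                  # processes: (name, arrival, burst)
    pending = sorted(processes, key=lambda p: p[1])
    time, schedule = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                # CPU idles until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        pending.remove(job)
        name, arrival, burst = job
        time += burst
        schedule.append((name, time, time - arrival, time - arrival - burst))
    return schedule                  # (name, CT, TAT, WT) per process

schedule = sjf([("P1", 0, 5), ("P2", 1, 6), ("P3", 2, 7),
                ("P4", 3, 8), ("P5", 1, 9)])
for row in schedule:
    print(row)
```

At time 0 only P1 is ready; at time 5 the remaining jobs are picked purely by burst time, which yields the order P1, P2, P3, P4, P5 shown in the Gantt chart.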
Deadlock
A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding
a resource and waiting for another resource acquired by some other process.
Consider an example where two trains are coming toward each other on the same track and
there is only one track: neither train can move once they are in front of each other. A
similar situation occurs in operating systems when two or more processes hold some
resources and wait for resources held by the other(s). For example, in the diagram below,
Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process
2, and Process 2 is waiting for Resource 1.
Deadlock can arise if following four conditions hold simultaneously (Necessary
Conditions)
1. Mutual Exclusion: One or more than one resource is non-sharable (Only one process can
use at a time). There should be a resource that can only be held by one process at a time. In
the diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.
2. Hold and Wait: A process is holding at least one resource and waiting for resources. A
process can hold multiple resources and still request more resources from other processes
which are holding them. In the diagram given below, Process 2 holds Resource 2 and
Resource 3 and is requesting the Resource 1 which is held by Process 1.
3. No Preemption: A resource cannot be taken from a process unless the process releases the
resource. A resource cannot be preempted from a process by force. A process can only
release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1
from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its
execution is complete.
4. Circular Wait: A set of processes are waiting for each other in circular form. A process is
waiting for the resource held by the second process, which is waiting for the resource held by
the third process and so on, till the last process is waiting for a resource held by the first
process. This forms a circular chain. For example: Process 1 is allocated Resource2 and it is
requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop.
1) Deadlock prevention or avoidance: Do not allow the system to enter a deadlocked state.
2) Deadlock detection and recovery: Let deadlock occur, then detect it and use preemption
to handle it once it has occurred.
3) Ignore the problem altogether: If deadlock is very rare, let it happen and reboot the
system. This is the approach that both Windows and UNIX take.
Deadlock Detection
For deadlock detection, we can run an algorithm to check for a cycle in the
resource-allocation graph. When every resource has only a single instance, the presence
of a cycle in the graph is a sufficient condition for deadlock.
In the diagram above, Resource 1 and Resource 2 have single instances. There is a cycle
R1 → P1 → R2 → P2 → R1. So, deadlock is confirmed.
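The cycle check can be sketched with a depth-first search over the resource-allocation graph of the example (assignment edges R → P, request edges P → R; the graph literal below encodes that example):

```python
# Sketch of deadlock detection: DFS for a cycle in the resource-allocation
# graph R1 -> P1 -> R2 -> P2 -> R1.
graph = {
    "R1": ["P1"],   # R1 is allocated to P1
    "P1": ["R2"],   # P1 requests R2
    "R2": ["P2"],   # R2 is allocated to P2
    "P2": ["R1"],   # P2 requests R1, closing the cycle
}

def has_cycle(graph):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True          # back edge found: a cycle exists
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in list(graph) if n not in visited)

deadlocked = has_cycle(graph)
print(deadlocked)  # True
```

Since both resources have single instances, the detected cycle is enough to confirm the deadlock; with multiple instances per resource a cycle would only indicate the possibility of one.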