Lesson 05 -Operating Systems
An operating system is the most important software that runs on a computer. It manages
the computer's memory and processes, as well as all of its software and hardware. It also
allows you to communicate with the computer without knowing how to speak the
computer's language. Without an operating system, a computer is useless.
Competency 5
Competency Level 5.1: Defines the term computer operating system (OS) and investigates its
A computer consists of hardware, firmware and software. Any physical component of a computer
system with a definite shape is called hardware. Examples of hardware include: mouse, keyboard,
display unit, hard disk, speaker, printer etc. The booting instructions stored in the ROM (Read Only
Memory) are called firmware. The initial text information displayed on the screen is displayed by
firmware.
1. When the user powers up the computer the CPU (Central Processing Unit) activates the
BIOS (Basic Input Output System).
2. The first program activated is POST (Power On Self-Test). Using the CMOS (Complementary
Metal Oxide Semiconductor) memory this checks all the hardware and confirms that all are
functioning properly.
3. After that it reads the MBR (Master Boot Record) in the boot drive in accordance with the
boot sequence configured in the CMOS memory.
4. The boot loader in the MBR then loads the Operating System into the RAM (Random
Access Memory).
5. Once this is performed the Operating System takes over the control of the computer and
displays a user interface to the user.
This whole process is called booting which means that an Operating System is loaded into the
RAM (main memory).
Software is a set of instructions given to the computer to perform some activity.
There are many types of software. They can be broadly classified as follows:
1. System Software : System software is generally divided into three types. They are:
a. Operating System – The Operating System allows the user to utilize the functions of
a computer by managing the hardware and software in it. The image below depicts how
the system software and application software interact with the hardware.
b. Utility Software – These are used to manage and analyze the computer system. Utility
software differs from application software in its complexity and operational activities:
utility software helps in managing the resources of the computer, whereas application
software performs the user's tasks. There are many utility programs dedicated to specific
functions. Some of them are mentioned below:
Disk Formatting – to prepare the storage device in order to save the files and folders
Disk defragmenters - detect computer files whose contents are scattered across
several locations on the hard disk and collect the fragments into one contiguous
area.
Disk cleaners - find files that are unnecessary to computer operation, or take up
considerable amounts of space.
c. Language Translators – High-level programming languages are close to human languages.
These high-level languages are translated into machine language (i.e. 0s and 1s), which is
understood by the computer, by language translators: compilers, interpreters and assemblers.
Compiler - translates an entire high-level language program into machine language at once.
Interpreter - translates and executes a high-level language program line by line, reporting
errors when detected.
Assembler - translates assembly language into machine language.
2. Application Software
The application software which runs on the Operating System is used to carry out the computer-based
activities of the user, such as creating documents, performing mathematical functions, data entry and
computer games.
The software which facilitate the interaction between human user and hardware is the Operating
System. The Operating System provides instructions for installation and management of various
application software. Not only that the Operating System manages all the input, output and
computer memory too, which means that Operating System is the sole software which manages
the whole computer system. It provides a virtual machine (hides hardware details and provides an
interface to applications and end users) and manages computing resources (keeps track of resource
usage and allocates resources among programs).

Evolution of Operating Systems

Early systems:
No operating system.
Programs were loaded directly into the machine.

Multiprogramming
Features :
Introduced in the 3rd generation to minimize processor idle time during I/O.
Memory is partitioned to hold multiple programs.
When the current program is waiting for I/O, the OS switches the processor to execute another
program in memory.
If memory is large enough to hold more programs, the processor can be kept close to 100% busy.

Time Sharing
Introduced to minimize response time and maximize user interaction during program execution.
Uses context switching to share the processor among programs.

Functions of an Operating System:
Process Management
Resource Management (Memory, I/O devices, Storage)
User Interfacing
Security Protection
Multi user-Multi task – A multi-user operating system has been designed for more than
one user to access the computer at the same time or at different times.
Multi-threading – A thread is also called a sub-process. Threads provide a way to improve
application performance through the parallel execution of sub-processes.
Real Time – OS is designed to run applications with very precise timing and with a high
degree of reliability. The main objective of real-time operating systems is their quick and
predictable response to events. These types of OS are needed in situations where downtime
is costly or a program delay could cause a safety hazard. Examples of Real-Time
Operating Systems are: Scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
Competency Level 5.2: Explores how an operating system manages directories/folders and
files in computers.
File Attributes : file name; type (e.g., source, data, executable); owner; location(s) on the secondary
storage; organization (e.g., sequential, indexed, random); access permissions – who is permitted to
read/write/delete data in the file; time and date of creation, modification and last access; file size
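Many of these attributes can be inspected programmatically. A minimal Python sketch using the standard library (the file name example.txt is just a placeholder created by the script itself):

```python
import os
import time

# Create a sample file so the script is self-contained.
with open("example.txt", "w") as f:
    f.write("hello")

info = os.stat("example.txt")                      # query the file system for attributes
print("size:", info.st_size, "bytes")              # file size attribute
print("modified:", time.ctime(info.st_mtime))      # last modification time
print("permissions:", oct(info.st_mode & 0o777))   # access permission bits
```

The exact set of attributes returned depends on the file system in use, which is why NTFS, for instance, can offer richer permissions than FAT.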
File Types : One possible implementation technique for file types is to include the type as part of the
file name, i.e., as a file name extension.
File can be classified into various types based on the content - Executable(.exe), Text(.txt, .docx,
…etc), Image(.bmp, .png, .jpeg, …etc), Video (.vob, .flv, .swf,…etc) Audio (.wav, .mp3,…etc),
Compressed( .rar, .zip,…etc).
File Structure : A file structure is a format that the operating system can understand.
File Systems : A file system is used to control how data is stored and retrieved. FAT and NTFS are
the types of file systems used in an operating system.
FAT is the file system introduced with the Microsoft Disk Operating System (MS-DOS).
FAT uses a File Allocation Table (FAT) to keep track of the files in the storage devices.
The FAT and the root directory reside at a fixed location of the volume so that the system's boot
files can be located correctly.
NTFS (New Technology File System) is a proprietary file system developed by Microsoft as an
improvement over FAT. Its advantages include:
The capability to recover from some disk-related errors automatically, which FAT cannot.
Better security, as permissions and encryption are used to restrict access to specific files to
approved users.
File Security :
Authentication refers to identifying each user of the system and associating the executing
programs with those users. It is the responsibility of the Operating System to create a protection
system which ensures that a user who is running a particular program is authentic. Operating
Systems generally identify/authenticate users in the following ways:
Username / Password - The user needs to enter a registered username and password via a
designated input device in order to log in to the system.
Disk Fragmentation
Fragmentation is the unintentional division of free disk space into many small areas that cannot be used efficiently.
Defragmentation is a process that locates and eliminates file fragments by rearranging them.
Space Allocation
Files are allocated disk spaces by operating system. Operating systems deploy following three
main ways to allocate disk space to files.
Contiguous Allocation
Linked Allocation
Indexed Allocation
Contiguous Allocation
Allocate disk space as a collection of adjacent/contiguous blocks. This technique needs to keep
track of unused disk space.
In the image shown below, there are three files in the directory. The starting block and the length
of each file are mentioned in the table. We can check in the table that the contiguous blocks are
assigned to each file as per its need.
Features:
Simple.
Easy Access.
A drawback is that the file size is not always known at the time of creation, which makes it difficult to reserve the right amount of contiguous space.
External fragmentation happens when there is a sufficient total amount of free memory to
satisfy a memory request, but the free memory is scattered in non-contiguous pieces, so the
request cannot be satisfied.
Internal fragmentation happens when the memory is split into fixed-sized blocks. Whenever
a process requests memory, a fixed-sized block is allotted to it. In case the memory allotted
to the process is somewhat larger than the memory requested, the difference between the
allotted and requested memory is the internal fragmentation.
Linked Allocation
Linked list allocation avoids the external fragmentation problem of contiguous allocation. In linked
list allocation, each file is considered as a linked list of disk blocks. The disk blocks allocated to a
particular file need not be contiguous on the disk; each disk block allocated to a file contains a
pointer which points to the next disk block allocated to the same file.
Advantages
1. Any free block can be utilized in order to satisfy the file block requests.
2. The file can continue to grow as long as free blocks are available.
Disadvantages
1. None of the pointers in the linked list may be broken, otherwise the file gets
corrupted.
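A toy sketch of linked allocation in Python may make the pointer chain concrete. The block numbers and data chunks below are invented purely for illustration; a real file system stores the chain on disk, not in a dictionary:

```python
# Each disk block holds some data plus a pointer to the next block of the file.
# A pointer of None marks the end of the file's chain.
disk = {}  # block number -> (data, next_block)

def write_file(blocks, chunks):
    """Store chunks of a file in the given free blocks, chaining them together."""
    for i, (block, chunk) in enumerate(zip(blocks, chunks)):
        nxt = blocks[i + 1] if i + 1 < len(blocks) else None
        disk[block] = (chunk, nxt)
    return blocks[0]  # the directory entry records only the starting block

def read_file(start):
    """Follow the chain of pointers to reassemble the file."""
    data, block = [], start
    while block is not None:
        chunk, block = disk[block]
        data.append(chunk)
    return "".join(data)

# Blocks 9, 16 and 1 are non-contiguous, yet together they hold one file.
start = write_file([9, 16, 1], ["ope", "rat", "ing"])
print(read_file(start))  # -> "operating"
```

Notice that reading the file requires following every pointer from the start, which is why random access is slow in linked allocation and why a single broken pointer corrupts the rest of the file.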
Indexed Allocation
Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation scheme
stores all the disk pointers of a file in one block called the index block. The index block does not hold
the file data; it holds the pointers to all the disk blocks allocated to that particular file. The file ends
at a nil pointer.
Advantages
1. No external fragmentation.
Disadvantages
1. The size of a file depends upon the number of pointers an index block can hold.
Secondary storage is the non-volatile repository for both user and system data and programs, for
example:
Source programs
Executable programs
Disk formatting
Formatting is the process of preparing a data storage device for initial use which may also create
one or more new file systems.
The first part of the formatting process that performs basic medium preparation is often referred
to as "low-level formatting”. Partitioning is the common term for the second part of the process,
making the data storage device visible to an operating system.
The third part of the process, usually termed "high-level formatting", most often refers to the
creation of a new file system.
As file deletion is done by the operating system, data on a disk are not fully erased during every
high-level format. Instead, the links to the files are deleted, and the area on the disk containing the
data is simply marked as available for reuse.
Compaction is a process in which the free space is collected in a large memory chunk to make
some space available for processes. In memory management, swapping creates multiple fragments
in the memory because of processes moving in and out. Compaction refers to combining all
the empty spaces together while moving the processes next to one another.
Competency Level 5.3: Explores how an operating system manages processes in computers.
What is a Process?
A process is a program in execution. Each process has:
an ID
executable code
data needed for execution
A process may end for reasons such as:
normal termination
execution time-limit exceeded
Interrupts
An interrupt is an event that alters the sequence of execution of a process.
An interrupt can occur due to a timer expiry, an OS service request or an I/O completion.
For example, when a disk drive has finished transferring the requested data, it generates an
interrupt to the OS to inform the OS that the task is over.
Interrupts occur asynchronously to the ongoing activity of the processor. Thus the times at which
interrupts occur are unpredictable.
Interrupt Handling
Generally, I/O devices are slower than the CPU. After each I/O call, the CPU would have to sit idle
until the I/O device completes the operation, so the processor saves the status of the current process
and executes some other process. When the I/O operation is over, the I/O device issues an interrupt
to the CPU, which then restores the original process and resumes its execution.
Process Management
In multiprogramming environment, the OS decides which process gets the processor when
and for how much time. This function is called process scheduling. An Operating System
does the following activities for processor management:
Keeps tracks of processor and status of process. The program responsible for this
task is known as traffic controller.
Swapping is a mechanism in which a process can be temporarily swapped out of main memory
(moved) to secondary storage (disk) to make that memory available to other processes. At some
later time, the system swaps back the process from the secondary storage to main memory.
Seven State Process Transition diagram
Created/New State
When a process is first created, it occupies the created or new state. In this state, the process waits
for admission to the ready state. This admission is approved or delayed by software called the
Long Term Scheduler. The operating system's role is to manage the execution of existing and
newly created processes by moving them between the two states (the Ready state and the
Swapped-out and Ready state).
Ready State
Processes that were in the new state next go to this state. A process in the ready state has been
loaded into main memory and is waiting to be executed by the CPU. (The system also keeps a
special idle process, which is always ready to run and never terminates.) A ready queue or run
queue is used in computer scheduling.
Modern computers are capable of running many different programs or processes at the same
time. However, the CPU is only capable of handling one process at a time. Processes that are ready
for the CPU are kept in a queue for "ready" processes. Other processes that are waiting for an
event to occur, such as loading information from a hard drive or waiting on an internet connection,
are not in the ready queue.
Running State
This state, also called the active or execution state, is the state in which the process is executed by
the CPU. When in this state, if the process exceeds its allocated time period, it may be
context-switched back to the ready state, or it may temporarily move to the blocked state.
Blocked state
A process transitions to a blocked state when it is waiting for some event, such as a resource
becoming available or the completion of an I/O operation. In a multitasking computer system,
individual processes must share the resources of the system. This state is also called the sleeping
state. When a process comes to this state, it is removed from the CPU but retained in the main
memory. A process has to remain blocked until its resources become available. Once the resources
are obtained, the blocked process moves to the ready state and then to the running state.
Terminated/Exit state
A process may be terminated either from the running state by completing its execution or by being
killed. Normally these processes are removed from the main memory; processes which have not
been removed are called "zombies". Typical reasons for termination include:
Normal completion
Memory unavailable
Swapped out and Ready state
If a process stays in the ready state for a long time, it is moved to virtual memory in order to
provide space for other, higher-priority processes. When the process is to be resumed and main
memory becomes available, its state is changed back to the ready state.
Swapped out and Blocked state
If the main memory is too loaded, or to give space to high-priority processes, processes in the
blocked state in main memory are moved to virtual memory into the swapped-out and blocked
state. When the process is to be resumed and main memory becomes available, its state is
changed back to the blocked state.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to
keep track of a process as listed below in the table:
1 Process State
The current state of the process, i.e., whether it is new, ready, running, waiting or
terminated.
2 Process ID
Unique identification for each of the process in the operating system.
3 Program Counter
Program Counter is a pointer to the address of the next instruction to be
executed for this process.
4 CPU registers
Various CPU registers whose contents must be saved when the process leaves the
running state, so that its execution can later be resumed.
5 Memory management information
This includes information such as the page table, memory limits and segment table,
depending on the memory system used by the operating system.
6 IO status information
This includes a list of I/O devices allocated to the process.
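The PCB fields listed above can be sketched as a simple data structure. The field names below are illustrative only, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                          # 2. unique process ID
    state: str = "new"                                # 1. current process state
    program_counter: int = 0                          # 3. address of next instruction
    registers: dict = field(default_factory=dict)     # 4. saved CPU registers
    memory_info: dict = field(default_factory=dict)   # 5. page table, memory limits
    io_devices: list = field(default_factory=list)    # 6. I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the PCB as the process changes state
print(pcb)
```

During a context switch, it is exactly these saved fields (program counter and registers in particular) that let a process resume later from the same point.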
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
Context Switching
A context switch is the mechanism to store and restore the state or context of a CPU in
Process Control block so that a process execution can be resumed from the same point at a
later time.
Using this technique, a context switcher enables multiple processes to share a single CPU. The
context switcher saves the contents of all processor registers for the process being removed
from the CPU in its process control block.
Context switching can significantly affect performance as modern computers have a lot of
general and status registers to be saved.
Types of Scheduling
Long-term scheduling (Job scheduling): It determines which programs are admitted to the
system for processing. The job scheduler selects processes from the queue and loads them into
memory for execution, thereby controlling the degree of multiprogramming (the number of
processes in memory).
Medium-Term Scheduling
Removes processes from main memory (swapping out) and later brings them back (swapping in),
thereby reducing the degree of multiprogramming.
Short-Term Scheduling
Determines which process is going to execute next (also called CPU scheduling).
The short term scheduler is known as the dispatcher.
Scheduler Comparison
Long-term scheduler: a job scheduler; selects processes and loads them into the memory for
execution; controls the degree of multiprogramming.
Medium-term scheduler: a swapping scheduler; moves processes between main memory and disk.
Short-term scheduler: a CPU scheduler; selects a process that is ready to execute for dispatching
so that its execution can be continued; runs most frequently.
Turnaround time : Time required for a particular process to complete, from submission time
to completion.
Response time : The time taken in an interactive program from the issuance of a command
to the commencement of a response to that command.
Throughput : Number of processes completed per unit time. May range from 10 / second to
1 / hour depending on the specific processes.
Waiting time : How much time a process spends in the ready queue waiting its turn to get
on the CPU.
Scheduling Policies
Non-preemptive : Once a process is in the running state, it will continue until it terminates or
blocks itself for I/O.
Preemptive : A currently running process may be interrupted and moved to the ready state by the
OS. This allows for better service, since no single process can monopolize the processor for very long.
Scheduling Algorithms
There are various algorithms which are used by the Operating System to schedule the processes
on the processor in an efficient way.
The goals of scheduling include efficient CPU utilization and maximum throughput.
The following algorithms can be used to schedule the jobs.
First Come First Serve algorithm
First come first serve (FCFS) scheduling algorithm simply schedules the jobs according to their
arrival time. The job which comes first in the ready queue gets the CPU first: the earlier the
arrival time of the job, the sooner the job gets the CPU. FCFS scheduling may cause the
problem of starvation if the burst time of the first process is the longest among all the jobs.
Disadvantages of FCFS
1. The scheduling method is non-preemptive; the process will run to completion.
2. Due to the non-preemptive nature of the algorithm, the problem of starvation may occur.
3. Although it is easy to implement, it is poor in performance, since the average waiting
time is higher than in the other algorithms.
In the above example, you can see that we have three processes P1, P2, and P3, and they are
coming into the ready state at 0 ms, 2 ms, and 2 ms respectively. So, based on the arrival time, the
process P1 will be executed for the first 18ms. After that, the process P2 will be executed for 7ms
and finally, the process P3 will be executed for 10ms. One thing to be noted here is that if the
arrival time of the processes is the same, then the CPU can select any process.
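The schedule above can be checked with a short FCFS simulation. The arrival and burst times are taken from the example in the text: P1 = (0 ms, 18 ms), P2 = (2 ms, 7 ms), P3 = (2 ms, 10 ms):

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), already in arrival order."""
    clock, result = 0, {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)      # the CPU may sit idle until arrival
        clock = start + burst            # run to completion (non-preemptive)
        result[name] = {"waiting": start - arrival,
                        "turnaround": clock - arrival}
    return result

times = fcfs([("P1", 0, 18), ("P2", 2, 7), ("P3", 2, 10)])
print(times)
```

Running it shows the effect of the long first burst: P2 and P3 wait 16 ms and 23 ms respectively, which is the starvation-like behaviour the disadvantages above describe.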
Shortest Job First algorithm
The shortest job first (SJF) scheduling algorithm schedules the processes according to their burst time.
In SJF scheduling, the process with the lowest burst time, among the list of available processes in
the ready queue, is going to be scheduled next.
However, it is very difficult to predict the burst time needed for a process, hence this algorithm is
hard to implement in practice.
In the above example, at 0ms, we have only one process i.e. process P2, so the process P2 will be
executed for 4ms. Now, after 4ms, there are two new processes i.e. process P1 and process P3. The
burst time of P1 is 5ms and that of P3 is 2ms. So, amongst these two, the process P3 will be
executed first because its burst time is less than P1. P3 will be executed for 2ms. Now, after 6ms,
we have two processes with us i.e. P1 and P4 (because we are at 6ms and P4 comes at 5ms).
Amongst these two, process P4 has a smaller burst time than P1. So, P4 will be
executed for 4ms and after that P1 will be executed for 5ms. So, the waiting time and turnaround
time of these processes will be:
Process Waiting Time Turnaround Time
P1 7 ms 12 ms
P2 0 ms 4 ms
P3 0 ms 2 ms
P4 1 ms 5 ms
Total waiting time: (7 + 0 + 0 + 1) = 8 ms
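A small non-preemptive SJF simulation reproduces this table. The arrival times are inferred from the walk-through above (P1 = 3 ms, P2 = 0 ms, P3 = 4 ms, P4 = 5 ms, with bursts 5, 4, 2 and 4 ms), so treat them as an assumption:

```python
def sjf(processes):
    """Non-preemptive shortest-job-first. processes: {name: (arrival, burst)}."""
    clock, times = 0, {}
    remaining = dict(processes)
    while remaining:
        ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
        if not ready:                                  # nothing has arrived: idle
            clock = min(a for a, b in remaining.values())
            continue
        name = min(ready, key=lambda n: ready[n][1])   # smallest burst wins
        arrival, burst = remaining.pop(name)
        clock += burst                                 # run to completion
        times[name] = {"waiting": clock - burst - arrival,
                       "turnaround": clock - arrival}
    return times

times = sjf({"P1": (3, 5), "P2": (0, 4), "P3": (4, 2), "P4": (5, 4)})
print(times)
```

The simulation picks the same order as the text (P2, P3, P4, P1) and yields the waiting times 7, 0, 0 and 1 ms shown in the table.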
Round Robin algorithm
Round Robin is one of the most popular scheduling algorithms and can actually be implemented in
most operating systems. It is the preemptive version of first come first serve scheduling, and it
focuses on time sharing. In this algorithm, every process gets executed in a cyclic way. A certain
time slice, called the time quantum, is defined in the system. Each process present in the ready
queue is assigned the CPU for that time quantum; if the execution of the process completes during
that time, the process terminates, otherwise the process goes back to the ready queue and waits
for its next turn to complete its execution.
In the above example, every process will be given 2ms in one turn because we have taken the time
quantum to be 2ms. So process P1 will be executed for 2ms, then process P2 will be executed for
2ms, then P3 will be executed for 2 ms. Again process P1 will be executed for 2ms, then P2, and so
on. The waiting time and turnaround time of the processes will be:
Process Waiting Time Turnaround Time
P1 13 ms 23 ms
P2 10 ms 15 ms
P3 13 ms 21 ms
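This table can be reproduced with a round-robin simulation. The burst times 10, 5 and 8 ms are inferred from the table (turnaround minus waiting), and all three processes are assumed to arrive at 0 ms:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst}. All processes are assumed to arrive at time 0."""
    queue = deque(bursts)                 # FIFO ready queue
    remaining = dict(bursts)
    clock, times = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        clock += run                      # run for one quantum (or less)
        remaining[name] -= run
        if remaining[name] == 0:          # finished: record its times
            times[name] = {"turnaround": clock,
                           "waiting": clock - bursts[name]}
        else:                             # quantum expired: back of the queue
            queue.append(name)
    return times

times = round_robin({"P1": 10, "P2": 5, "P3": 8}, quantum=2)
print(times)
```

With a 2 ms quantum, the simulation produces exactly the waiting times (13, 10, 13 ms) and turnaround times (23, 15, 21 ms) shown above.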
Priority scheduling algorithm
In priority scheduling, a priority number is assigned to each process. In some systems, the
lower the number, the higher the priority; in others, the higher the number, the higher the
priority. The process with the highest priority among the available processes is given the CPU.
Two types of priority scheduling algorithm exist: preemptive priority scheduling and
non-preemptive priority scheduling.
In the above example, at 0 ms we have only one process, P1. So P1 will execute for 5 ms, because
we are using the non-preemptive technique here. After 5 ms, there are three processes in the ready
state: P2, P3 and P4. Out of these three, process P4 has the highest priority, so it will be executed
for 6 ms; after that, process P2 will be executed for 3 ms, followed by process P3. The waiting and
turnaround times of the processes will be:
Process Waiting Time Turnaround Time
P1 0 ms 5 ms
P2 10 ms 13 ms
P3 12 ms 20 ms
P4 2 ms 8 ms
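A non-preemptive priority simulation reproduces this table. The arrivals, bursts and priority numbers below are inferred from the walk-through (P1 arrives at 0 ms with burst 5, P2 at 1 ms with burst 3, P3 at 2 ms with burst 8, P4 at 3 ms with burst 6; lower number means higher priority, and P1's own priority does not affect the outcome here), so treat them as assumptions:

```python
def priority_np(processes):
    """Non-preemptive priority scheduling.
    processes: {name: (arrival, burst, priority)}, lower number = higher priority."""
    clock, times = 0, {}
    remaining = dict(processes)
    while remaining:
        ready = {n: p for n, p in remaining.items() if p[0] <= clock}
        if not ready:                                   # nothing has arrived: idle
            clock = min(a for a, b, pr in remaining.values())
            continue
        name = min(ready, key=lambda n: ready[n][2])    # highest priority wins
        arrival, burst, _ = remaining.pop(name)
        clock += burst                                  # run to completion
        times[name] = {"waiting": clock - burst - arrival,
                       "turnaround": clock - arrival}
    return times

times = priority_np({"P1": (0, 5, 2), "P2": (1, 3, 2),
                     "P3": (2, 8, 3), "P4": (3, 6, 1)})
print(times)
```

The execution order matches the text (P1, P4, P2, P3) and the waiting times come out as 0, 10, 12 and 2 ms.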
Multilevel queue scheduling
In multilevel queue scheduling, all the processes are assigned permanently to a queue at the
time of entry. Processes do not move between queues. The processes in the queues can be
divided into different classes, where each class has its own scheduling; for example, interactive
processes and batch processes.
The main advantage of multilevel queue scheduling is the low scheduling overhead that results
from the processes being permanently assigned to their queues.
Let us see another example of multilevel queue scheduling with five queues with different
priorities:
System processes
Interactive processes
Interactive editing processes
Batch processes
User processes
Each upper-level queue has absolute priority over the lower-level queues. For example, if an
interactive editing process enters the ready queue, then the currently running batch process will
be preempted.
Competency Level 5.4: Explores how an operating system manages the resources
Memory Management
Memory management is the function of the operating system which handles or manages
primary memory and moves processes back and forth between main memory and disk during
execution. Memory management keeps track of each and every memory location, regardless of
whether it is allocated to some process or free. It checks how much memory is to be allocated to
processes and decides which process will get memory at what time. It tracks whenever some
memory gets freed or unallocated and updates the status accordingly.
Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts
are not in use.
In multiprogramming, the OS decides which process will get memory when and how much.
The Memory Management Unit (MMU) converts the virtual address generated by a
process at the time it is sent to memory. The user program deals with logical addresses; it never
sees the real physical addresses.
MMU uses the following mechanism to convert virtual address to physical address.
The value in the base register is added to every address generated by a user process, which
is treated as an offset at the time it is sent to memory. For example, if the base register value is
10000, then an attempt by the user to use address location 100 will be dynamically
relocated to location 10100.
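This relocation can be expressed directly in code. The base register value 10000 comes from the example above; the limit value is a hypothetical size added for illustration, since real MMUs also check that addresses stay within the process's space:

```python
BASE_REGISTER = 10000   # loaded by the OS when the process is dispatched
LIMIT = 4000            # hypothetical size of the process's address space

def translate(logical_address):
    """Map a logical address to a physical address, relocation-register style."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("address out of range for this process")
    return BASE_REGISTER + logical_address

print(translate(100))   # -> 10100, as in the example above
```

Because every address passes through this translation, the process can be moved in memory simply by changing the base register.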
Paging
Paging is a memory management technique in which main memory is divided into fixed-size
frames and a process's logical memory is divided into pages of the same size. Since the last page
of a process is rarely completely full, paging can cause internal fragmentation.
Mapping
The operating system takes care of mapping the logical addresses to physical addresses at the
time of memory allocation to the program. The runtime mapping from virtual to physical
addresses is done by the Memory Management Unit (MMU).
Page table
A page table is the data structure used by a virtual memory system in a computer operating
system to store the mapping between virtual addresses and physical addresses. Logical addresses
are generated by the CPU for the pages of the processes; therefore they are generally used by the
processes. Physical addresses are the actual frame addresses of the memory; they are generally
used by the memory unit.
A computer can address more memory than the amount physically installed on the system. This
extra memory is called virtual memory, and it is a section of a hard disk that is set up to
emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by
using disk. Second, it allows us to have memory protection, because each virtual address is
translated to a physical address.
Virtual memory is partitioned into equal-size pages. Main memory is also partitioned into
equal-size page frames.
Run partially loaded programs – the entire program need not be in memory all the time.
Degree of Multiprogramming: Many programs simultaneously reside in memory.
Permit sharing of memory segments or regions. For example, read-only code segments
should be shared between program instances.
Page number (p): the number of bits required to represent the pages in the Logical Address Space.
Page offset (d): the number of bits required to represent a particular word in a page, i.e., the page
size of the Logical Address Space.
Frame number (f): the number of bits required to represent the frames in the Physical Address Space.
Frame offset (d): the number of bits required to represent a particular word in a frame, i.e., the
frame size of the Physical Address Space.
The following list of formulas is very useful for solving the numerical problems based on
paging.
If the number of frames in main memory = 2^X, then the number of bits in the frame number = X bits
If the page size = 2^X bytes, then the number of bits in the page offset = X bits
If the size of main memory = 2^X bytes, then the number of bits in the physical address = X bits
Note :
In general, if the given address consists of n bits, then using n bits, 2^n locations are possible.
Then, size of memory = 2^n x size of one location. If the memory is byte-addressable, then the size
of one location = 1 byte; thus, size of memory = 2^n bytes.
If the memory is word-addressable, where 1 word = m bytes, then the size of one location = m
bytes, and size of memory = 2^n x m bytes.
Q1. Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte
addressable.
Size of memory = 2^22 x 2 bytes
= 2^23 bytes
= 8 MB
Q2. Calculate the number of bits required in the address for memory having size of 16 GB. Assume
the memory is 4-byte addressable.
Let n be the number of bits required. Then, size of memory = 2^n x 4 bytes.
2^n x 4 bytes = 16 GB
2^n x 2^2 = 2^34
2^n = 2^32
∴ n = 32 bits
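Both worked examples follow the formula size = 2^n × bytes-per-location, which a few lines of Python can verify:

```python
import math

def memory_size(address_bits, bytes_per_location=1):
    """Total memory size in bytes for a given address width."""
    return (2 ** address_bits) * bytes_per_location

def address_bits(size_bytes, bytes_per_location=1):
    """Address width needed to reach every location of a memory."""
    return int(math.log2(size_bytes // bytes_per_location))

# Q1: 22-bit address, 2-byte addressable -> 2^23 bytes = 8 MB
print(memory_size(22, 2) // 2**20, "MB")
# Q2: 16 GB memory, 4-byte addressable -> 32 address bits
print(address_bits(16 * 2**30, 4), "bits")
```

The same two functions can be reused for any of the remaining numerical problems in this section.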
Q3. A computer has an 18-bit virtual memory address space where six bits are used for the page
number. Calculate the total number of pages defined by the above addressing scheme. Consider
the following virtual memory address – 010111000000111100. What are the page number and
the offset?
Number of pages = 2^6 = 64 pages
010111 | 000000111100
Page number = 010111, Offset = 000000111100
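The split can be checked with bit operations, using the 6-bit page number and 12-bit offset from the question:

```python
PAGE_BITS, OFFSET_BITS = 6, 12               # 18-bit virtual address in total

address = 0b010111000000111100
page = address >> OFFSET_BITS                # top 6 bits: the page number
offset = address & ((1 << OFFSET_BITS) - 1)  # bottom 12 bits: the offset

print("pages available:", 2 ** PAGE_BITS)    # 64 pages
print("page:", page, "offset:", offset)      # page 23, offset 60
```

Shifting and masking like this is exactly what the MMU does in hardware when it splits an address into page number and offset.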
Q4. In a certain computer, the physical memory has a total capacity of 4GB. The size of a memory
frame is 4 KB. Compute the total number of frames in the physical memory.
Number of frames = 4 GB / 4 KB = 2^32 / 2^12 = 2^20 frames
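The same arithmetic in code:

```python
physical_memory = 4 * 2**30   # 4 GB, in bytes
frame_size = 4 * 2**10        # 4 KB, in bytes

frames = physical_memory // frame_size
print(frames, frames == 2**20)   # 1048576 True: 2^20 frames
```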
Q5. If a computer system is byte addressable and uses 32-bit addresses to access any byte in its
memory. what is the maximum usable size of its memory in Giga Bytes (GB)? Show all your
workings clearly.
Q6. A 32-bit computer has a byte addressable main memory. The computer uses 32-bit address to
access any byte in its memory. It is observed that a maximum of 4 GB memory is available for a
process even after the main memory is replaced by an 8 GB memory. Explain, with the calculations,
why this happens.
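A quick numeric check for Q5 and Q6: with a 32-bit byte-addressable address, at most 2^32 bytes = 4 GB can ever be named, which is why the extra memory in Q6 remains out of reach for a single process. (The workings below are a sketch of the expected answer, not part of the original exercises.)

```python
address_bits = 32
max_addressable = 2 ** address_bits      # bytes, since the memory is byte-addressable

print(max_addressable // 2**30, "GB")    # -> 4 GB
# Even with 8 GB installed, a 32-bit address can only name 4 GB of it.
```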
Device driver
Device driver is utility software. The computer communicates with peripheral devices through
device drivers. A driver provides a software interface to hardware devices, enabling operating
systems and other computer programs to access hardware functions without knowing the precise
hardware details. Device drivers depend on both the hardware and the operating system loaded
into the computer.
Spooling
Spooling is an acronym for simultaneous peripheral operations on-line. Spooling refers to putting
the data of various I/O jobs in a buffer. This buffer is a special area in memory or on the hard disk
which is accessible to I/O devices. An operating system does the following activities related to
spooling:
Handles I/O device data spooling as devices have different data access rates.
Maintains the spooling buffer which provides a waiting station where data can rest while
the slower device catches up.
Maintains parallel computation: because of the spooling process, a computer can perform I/O
in a parallel fashion. It becomes possible to have the computer read data from a tape, write
data to disk and write out to a printer while it is doing its computing task.
Example -
In print spooling, documents are loaded into a buffer (usually an area on a disk), and then
the printer pulls them off the buffer at its own rate.
Because the documents are in a buffer where they can be accessed by the printer, you can
perform other operations on the computer while the printing takes place in the
background.
Spooling also lets you place a number of print jobs in a queue instead of waiting for each
one to finish before specifying the next one.
Advantages