
Operating Systems

A/L ICT - Lesson 05

An operating system is the most important software that runs on a computer. It manages
the computer's memory and processes, as well as all of its software and hardware. It also
allows you to communicate with the computer without knowing how to speak the
computer's language. Without an operating system, a computer is useless.
Competency 5

Uses operating systems to manage the functionality of computers

Competency Level 5.1: Defines the term computer operating system (OS) and investigates its

need in computer systems.

A computer consists of hardware, firmware and software. Any physical component of a computer
system with a definite shape is called hardware. Examples of hardware include the mouse, keyboard,
display unit, hard disk, speakers and printer. The booting instructions stored in the ROM (Read Only
Memory) are called firmware; the initial text information shown on the screen is displayed by
firmware.

How the initial operations of a computer are performed

1. When the user powers up the computer, the CPU (Central Processing Unit) activates the
BIOS (Basic Input Output System).

2. The first program activated is the POST (Power On Self-Test). Using the CMOS (Complementary
Metal Oxide Semiconductor) memory, it checks all the hardware and confirms that everything is
functioning properly.

3. After that, it reads the MBR (Master Boot Record) in the boot drive in accordance with the
firmware 'bootstrap loader' provided by the computer manufacturer.

4. Then the computer loads the Operating System from the boot drive into the RAM (Random
Access Memory).

5. Once this is done, the Operating System takes over control of the computer and
displays a user interface to the user.

This whole process is called booting, which means that an Operating System is loaded into the
RAM (main memory).

Software is a set of instructions given to the computer to perform some activity.

There are many types of software. They can be broadly classified as follows:
1. System Software : System software is generally divided into three types:

a. Operating System – The Operating System enables the user to utilize the functions of
a computer by managing the hardware and software in it. The image below depicts how
the system software and application software interact with the hardware.

b. Utility Software – These are used to manage and analyze the resources of the computer.
Utility software differs from application software in complexity and purpose: utility software
helps manage the computer's resources, whereas application software serves the user's own
tasks. Many utility programs are dedicated to a specific function. Some of them are
mentioned below:

• Anti-virus software – protects the computer from virus infections.

• Disk formatting – prepares a storage device to save files and folders.

• Disk defragmenters – detect files whose contents are scattered across several
locations on the hard disk and collect the fragments into one contiguous area.

• Disk cleaners – find files that are unnecessary to computer operation, or take up
considerable amounts of space.

c. Language Translators – A computer program (software) is made up of a set of
instructions. These instructions are written in high-level languages, which are close to
human languages. Language translators convert these high-level languages into the
machine language (i.e. 0s and 1s) understood by the computer. Assemblers, compilers
and interpreters are examples of language translators.

• Compiler – converts the whole program in one session and reports errors
detected after the conversion.

• Interpreter – converts the program one line of code at a time and reports errors
as they are detected.

• Assembler – translates assembly language into machine language.
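The compiler-versus-interpreter difference above can be illustrated with Python's built-in `compile()` and `exec()`. This is only an illustrative sketch of the error-reporting behaviour, not how real translators work internally:

```python
# Sketch: a "compiler" translates the whole source before any of it runs,
# so an error on line 3 is reported without lines 1-2 executing.
# An "interpreter" executes one statement at a time, so the lines before
# the bad one do run.

source_lines = [
    "x = 2",
    "y = x * 10",
    "z = y +",        # deliberate syntax error on the last line
]
source = "\n".join(source_lines)

def compile_whole(src):
    """Compiler-style: translate everything first; report errors before running."""
    try:
        code = compile(src, "<program>", "exec")
    except SyntaxError as e:
        return f"compile error at line {e.lineno}, nothing executed"
    exec(code)
    return "ran to completion"

def interpret_lines(lines):
    """Interpreter-style: execute one line at a time; stop at the first error."""
    env = {}
    for i, line in enumerate(lines, start=1):
        try:
            exec(line, env)
        except SyntaxError:
            return f"error at line {i}, lines 1-{i-1} already executed", env
    return "ran to completion", env

print(compile_whole(source))          # error reported, x and y never created
status, env = interpret_lines(source_lines)
print(status, "| x =", env.get("x"))  # x was created before the error hit
```

Note how the interpreter-style run leaves `x` and `y` defined, while the compiler-style run rejects the program before executing anything.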

2. Application Software

Application software, which runs on the Operating System, is used to carry out the user's
computer-based activities such as creating documents, performing mathematical functions,
data entry and computer games.

Introduction to Computer operating system

The software which facilitates the interaction between the human user and the hardware is the
Operating System. The Operating System provides for the installation and management of various
application software. It also manages all input, output and computer memory, which means the
Operating System is the sole software that manages the whole computer system. It provides a
virtual machine (hides hardware details, provides an interface to applications and end users),
manages computing resources (keeps track of resource usage, grants/revokes permissions for
resources), and executes application software.


Evolution of Operating System

1) No OS (late 1940s – mid 1950s)

• Serial processing – programs processed one after another.

• Single-user system.

• The programmer/user interacted directly with the hardware.

• No operating system.

• Programs loaded directly into the computer.

• Machines were run from a console with display lights and toggle switches.

Features:

Manual program scheduling, uniprogramming; the processor sat idle while loading
programs and doing I/O.

2) Simple Batch System

• Introduced to maximize processor utilization.

• Programs were recorded on a magnetic tape with an inexpensive machine.

• The OS loaded and executed the programs on the tape one at a time.

• When the current program ended execution, its output was written to another tape
and the OS loaded the next program.

• At the end of the entire batch of programs, the output tape was printed with an
inexpensive machine.

Features:

No direct access to hardware, uniprogramming, high response time; the processor sat
idle during I/O.

3) Multi-Programmed Batch Systems

• Central theme of modern OS.

• Introduced in the 3rd generation to minimize processor idle time during I/O.

• Memory is partitioned to hold multiple programs.

• When the current program is waiting for I/O, the OS switches the processor to execute
another program in memory.

• If memory is large enough to hold more programs, the processor can be kept close to
100% busy.

4) Time Sharing System

• Introduced to minimize response time and maximize user interaction during program
execution.

• Uses context switching.

• Enables the processor's time to be shared among multiple programs.

• Rapid switching among programs creates the illusion of concurrent execution of multiple
programs.

The following are some of the important functions of an operating system:

• Process management
• Resource management (memory, I/O devices, storage)
• User interfacing
• Security and protection

Different types of Operating Systems (based on users and tasks)

1. Based on the number of users:

• Single user – facilitates a single user to use the system at a time.

• Multi user – facilitates multiple users to use the system at a time.

2. Based on the number of tasks:

• Single task – executes only one program at a time.

• Multi task – executes multiple programs at a time.
Different types of Operating Systems

• Single user, single task – a single task is performed by one user at a time.

• Single user, multi task – several programs are run at the same time by a single user.

• Multi user, multi task – designed for more than one user to access the computer at the
same or different times.

• Multi-threading – a thread is also called a sub-process. Threads provide a way to improve
application performance through the parallel execution of sub-processes.

• Real time – designed to run applications with very precise timing and a high degree of
reliability. The main objective of real-time operating systems is a quick and predictable
response to events. They are needed in situations where downtime is costly or a program
delay could cause a safety hazard. Real-time operating systems are used in scientific
experiments, medical imaging systems, industrial control systems, weapon systems, robots,
air traffic control systems, etc.

• Time sharing systems – the processor's time is shared among multiple users/applications.
Provides quick response and reduces CPU idle time.

Competency Level 5.2: Explores how an operating system manages directories/folders and
files in computers.

Files : A file is a named collection of related information, usually a sequence of bytes.

A file can be viewed in two different ways.

1. Logical (programmer's) view: how the users see the file.

• Linear collection of records.

• Image file – cells (pixels) of intensity values.

• Linear sequence of bytes.


2. Physical (operating system) view: how the file is stored on secondary storage.

File Attributes : file name; type (e.g., source, data, executable); owner; location(s) on the secondary
storage; organization (e.g. sequential, indexed, random); access permissions (who is permitted to
read/write/delete data in the file); time and date of creation, modification and last access; file size.

File Types : One possible implementation technique for file types is to include the type as an
extension to the file name.

File can be classified into various types based on the content - Executable(.exe), Text(.txt, .docx,

…etc), Image(.bmp, .png, .jpeg, …etc), Video (.vob, .flv, .swf,…etc) Audio (.wav, .mp3,…etc),
Compressed( .rar, .zip,…etc).
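Classifying a file by its name extension, as described above, is a simple table lookup. A minimal sketch; the mapping below covers only the extensions listed in the text:

```python
import os

# Mapping built from the extensions listed above (illustrative, not exhaustive).
EXTENSION_TYPES = {
    ".exe": "Executable",
    ".txt": "Text", ".docx": "Text",
    ".bmp": "Image", ".png": "Image", ".jpeg": "Image",
    ".vob": "Video", ".flv": "Video", ".swf": "Video",
    ".wav": "Audio", ".mp3": "Audio",
    ".rar": "Compressed", ".zip": "Compressed",
}

def classify(filename):
    """Return the file category implied by the filename's extension."""
    _, ext = os.path.splitext(filename)      # split "name" and ".ext"
    return EXTENSION_TYPES.get(ext.lower(), "Unknown")

print(classify("report.docx"))   # Text
print(classify("song.MP3"))      # Audio (extension matching is case-insensitive)
print(classify("archive.zip"))   # Compressed
```

This is also why the extension is only a convention: renaming a file changes the lookup result, not the file's actual contents.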

Directory and file organization

Directories are containers used to organize files logically.

File Structure : A file structure is a format that the operating system can understand.

• A file has a certain defined structure according to its type.

• A text file is a sequence of characters organized into lines.

• An object file is a sequence of bytes organized into blocks that are understandable by the
machine.

File Systems : A file system controls how data is stored and retrieved. FAT and NTFS are two
types of file systems used by operating systems.

FAT (File Allocation Table)

• FAT is the file system introduced with the Microsoft Disk Operating System (MS-DOS).

• FAT uses a File Allocation Table (FAT) to keep track of files on the storage device.

• The FAT and the root directory reside at a fixed location on the volume so that the system's
boot files can be correctly located.

• To protect a volume, two copies of the FAT are kept.

NTFS (New Technology File System) is a proprietary file system developed by Microsoft. It is an
improvement over FAT.

The improvements include:

• The capability to recover from some disk-related errors automatically, which FAT cannot.

• Support for the Unicode encoding system.

• Improved support for larger hard disks.

• Better security, as permissions and encryption are used to restrict access to specific files to
approved users.
File Security :

Authentication refers to identifying each user of the system and associating the executing
programs with those users. It is the responsibility of the Operating System to create a protection
system which ensures that the user running a particular program is authentic. Operating systems
generally identify/authenticate users in the following ways:

• Username/password – the user must enter a username and password registered with the
operating system to log in.

• User attribute (fingerprint, eye/retina pattern, signature) – the user must present the
attribute via a designated input device to log in.

Disk Fragmentation

Fragmentation is the unintentional division of disk space into many small free areas that cannot
be used effectively.

Defragmentation

Defragmentation is a process that locates and eliminates file fragments by rearranging them.

File Storage Management

Space Allocation

Disk space is allocated to files by the operating system. Operating systems deploy the following
three main ways to allocate disk space to files:

• Contiguous allocation
• Linked allocation
• Indexed allocation
Contiguous Allocation

Allocate disk space as a collection of adjacent/contiguous blocks. This technique needs to keep
track of unused disk space.

In the image shown below, there are three files in the directory. The starting block and the length

of each file are mentioned in the table. We can check in the table that the contiguous blocks are
assigned to each file as per its need.

Features:

• Simple.

• Easy access.

• The file size must be known at the time of creation.

• Extending the file size is difficult.

• External fragmentation (free, unusable space between allocations).

External fragmentation happens when there is enough total free space to satisfy a process's
memory request, but the request cannot be fulfilled because the available memory is not
contiguous.

Internal fragmentation happens when memory is split into fixed-sized blocks. Whenever a
process requests memory, a fixed-sized block is allocated to it. If the memory allotted is
somewhat larger than the memory requested, the difference between the allotted and the
requested memory is internal fragmentation.
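Internal fragmentation is simple arithmetic: with fixed-sized blocks it is the difference between the space allocated and the space requested. A sketch, assuming a 4 KB block size:

```python
import math

BLOCK_SIZE = 4096  # assumed block size in bytes (4 KB)

def internal_fragmentation(requested_bytes):
    """Bytes wasted inside the last block when a request is rounded
    up to whole fixed-sized blocks."""
    blocks = math.ceil(requested_bytes / BLOCK_SIZE)   # whole blocks needed
    allocated = blocks * BLOCK_SIZE
    return allocated - requested_bytes

# A 10,000-byte file needs 3 blocks (12,288 bytes allocated),
# so 2,288 bytes are lost to internal fragmentation.
print(internal_fragmentation(10_000))   # 2288
print(internal_fragmentation(4096))     # 0 (exact fit wastes nothing)
```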
Linked Allocation

Linked list allocation solves the problems of contiguous allocation. In linked list allocation, each
file is treated as a linked list of disk blocks. The disk blocks allocated to a particular file need not
be contiguous on the disk: each disk block allocated to a file contains a pointer to the next disk
block of the same file.

Advantages

1. There is no external fragmentation with linked allocation.

2. Any free block can be utilized in order to satisfy the file block requests.

3. File can continue to grow as long as the free blocks are available.

4. Directory entry will only contain the starting block address.

Disadvantages

1. Random access is not provided.

2. Pointers require some space in the disk blocks.

3. If any pointer in the chain is broken or corrupted, the rest of the file is lost.

4. Reaching a given block requires traversing every block before it.
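Linked allocation can be sketched as a FAT-like table mapping each block to the next block of the same file. The block numbers below are made up for illustration:

```python
# FAT-like "next block" table: next_block[b] gives the block that follows
# block b in the same file; None marks the file's last block.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}

def read_file_blocks(start):
    """Follow the pointer chain from the starting block. Access is
    sequential: reaching block i means traversing all blocks before it."""
    chain, b = [], start
    while b is not None:
        chain.append(b)
        b = next_block[b]
    return chain

# The directory entry only needs to store the starting block (9 here).
print(read_file_blocks(9))   # [9, 16, 1, 10, 25]
```

Notice that the blocks 9, 16, 1, 10, 25 are scattered anywhere on the disk, which is why linked allocation has no external fragmentation, and why breaking one pointer loses the rest of the chain.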


Indexed Allocation

Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation
scheme stores all of a file's disk pointers in one block, called the index block. The index block
does not hold file data; it holds pointers to all the disk blocks allocated to that particular file.
The file ends at a nil pointer.

Advantages

1. Supports direct access.

2. A bad data block causes the loss of only that block.

3. No external fragmentation.

Disadvantages

1. A bad index block can cause the loss of the entire file.

2. The maximum size of a file depends on the number of pointers an index block can hold.

3. Having an index block for a small file wastes space.

4. More pointer overhead.
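Indexed allocation replaces the pointer chain with one index block listing all of a file's data blocks, which is what makes direct access possible. Block numbers and block size below are made up for illustration:

```python
# The index block holds pointers to every data block of the file, in order.
index_block = [19, 13, 16, 4, 7]   # hypothetical disk block numbers

def block_for_offset(offset, block_size=4096):
    """Direct access: the block holding a byte offset is found with one
    index lookup, with no traversal of the earlier blocks."""
    i = offset // block_size               # which logical block of the file
    if i >= len(index_block):
        raise ValueError("offset past end of file")
    return index_block[i]

print(block_for_offset(0))        # 19 (first block)
print(block_for_offset(9000))     # 16 (third block, since 9000 // 4096 == 2)
```

Compare this with linked allocation, where finding the block for offset 9000 would require reading every block before it.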


Maintenance of Secondary storage

Secondary storage is the non-volatile repository for both user and system data and programs.

Secondary storage is typically used to store:

• Source programs
• Executable programs
• Data for the programs
• Temporary data

Disk formatting

Formatting is the process of preparing a data storage device for initial use, which may also create
one or more new file systems.

The first part of the formatting process, which performs basic medium preparation, is often
referred to as "low-level formatting". Partitioning is the common term for the second part of the
process, which makes the data storage device visible to an operating system. The third part,
usually termed "high-level formatting", most often refers to the process of generating a new file
system.

Recovery of data from a formatted disk

As file deletion is handled by the operating system, data on a disk is not fully erased during every
high-level format. Instead, links to the files are deleted, and the area on the disk containing the
data is retained until it is overwritten.

Compaction is a process in which free space is collected into one large memory chunk to make
space available for processes. In memory management, swapping creates multiple fragments in
the memory because of processes moving in and out; compaction combines all these empty
spaces together.
Competency Level 5.3: Explores how an operating system manages processes in computers.

What is a Process?

• A process is a fundamental concept in modern operating systems.

• A process is basically a program in execution.

• A process is not a program: a program may have many processes.

Types of processes:

• I/O-bound processes

• Processor-bound processes

A process must have (at least):

• An ID
• Executable code
• Data needed for execution
• An execution context (program counter, priorities, whether it is waiting for I/O or not)

Reasons for process creation:

• A new batch job

• A user starts a program

• The OS creates a process to provide a service

• A running program starts another process

Reasons for process termination:

• Normal termination

• Execution time limit exceeded

• A requested resource is unavailable

• An execution error

• A memory access violation

• An operating system or parent process request

• The parent process has terminated

These and many other events may either terminate the process or simply return an error
indication to the running process. In all cases, the operating system provides a default
action, which may or may not be process termination.

Interrupts

• An interrupt is an event that alters the sequence of execution of a process.

• An interrupt can occur due to a timer expiry, an OS service request or an I/O completion.

• For example, when a disk driver has finished transferring the requested data, it generates an
interrupt to inform the OS that the task is over.

• Interrupts occur asynchronously to the ongoing activity of the processor, so the times at
which interrupts occur are unpredictable.

Interrupt Handling

I/O devices are generally slower than the CPU. After each I/O call, instead of sitting idle until the
I/O device completes the operation, the processor saves the status of the current process and
executes some other process. When the I/O operation is over, the I/O device issues an interrupt
to the CPU, which then restores the original process and resumes its execution.

Process Management

In a multiprogramming environment, the OS decides which process gets the processor when,
and for how much time. This function is called process scheduling. An Operating System does
the following activities for processor management:

• Keeps track of the processor and the status of processes. The program responsible for this
task is known as the traffic controller.

• Allocates the processor (CPU) to a process.

• De-allocates the processor when a process no longer requires it.

Swapping is a mechanism in which a process can be temporarily swapped (moved) out of main
memory to secondary storage (disk), making that memory available to other processes. At some
later time, the system swaps the process back from secondary storage to main memory.
Seven State Process Transition diagram

Created/New State

When a process is first created, it occupies the created or new state. In this state, the process
waits for admission to the ready state. This admission is approved or delayed by software called
the Long-Term Scheduler. The operating system's role is to manage the execution of existing and
newly created processes by moving them between these states until they finish.

Ready State

Processes in the new state move next to this state. A process in the ready state has been loaded
into main memory and is waiting to be executed by the CPU. A ready queue (or run queue) is
used in computer scheduling: modern computers are capable of running many different programs
or processes at the same time, but the CPU can handle only one process at a time. Processes that
are ready for the CPU are kept in the queue of "ready" processes. Other processes, which are
waiting for an event to occur, such as loading information from a hard drive or waiting on an
internet connection, are not in the ready queue.
Running State

This state, also called the active or execution state, is the state of the process currently being
executed by the CPU. While in this state, if the process exceeds its allocated time period, it may
be context-switched back to the ready state, or it may move temporarily to the blocked state.

Blocked state

A process transitions to the blocked state when it is waiting for some event, such as a resource
becoming available or the completion of an I/O operation. In a multitasking computer system,
individual processes must share the resources of the system. This state is also called the sleeping
state. When a process comes to this state, it is removed from the CPU but retained in main
memory. A process has to remain blocked until its resources become available; once they are
obtained, the blocked process moves to the ready state and then to the running state.

Terminated/Exit state

A process may be terminated either from the running state by completing its execution or by
being killed. Normally these processes are removed from main memory; processes that are not
removed are called "zombies".

There are many reasons for process termination:

• The batch job issues a halt instruction

• The parent terminates, so child processes terminate (cascading termination)

• Error and fault conditions

• Normal completion

• Time limit exceeded

• Memory unavailable

Swapped out and waiting

If a process stays in the ready state for a long time, it is moved to virtual memory in order to
provide space for other, higher-priority processes. When main memory is available again, the
process is resumed: its state is changed back to the ready state.

Swapped out and blocked

If main memory is too loaded, or to give space to high-priority processes, blocked processes in
main memory are moved to virtual memory, entering the swapped out and blocked state. When
main memory is available again, the process is resumed: its state is changed back to the blocked
state.
Process Control Block (PCB)

A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to
keep track of a process as listed below in the table:

No. Information & Description

1. Process State – the current state of the process, i.e. whether it is ready, running, waiting, etc.

2. Process ID – a unique identification for each process in the operating system.

3. Program Counter – a pointer to the address of the next instruction to be executed for this
process.

4. CPU registers – the contents of the CPU registers that must be saved when the process leaves
the running state and restored when it resumes.

5. Memory management information – includes the page table, memory limits and segment
table information, depending on the memory scheme used by the operating system.

6. I/O status information – includes the list of I/O devices allocated to the process.

The PCB is maintained for a process throughout its lifetime and is deleted once the process
terminates.

Context Switching

• A context switch is the mechanism of storing and restoring the state (context) of the CPU in
the Process Control Block so that a process's execution can be resumed from the same point
at a later time.

• Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.

• When the scheduler switches the CPU from executing one process to executing another, the
context switcher saves the contents of all processor registers for the process being removed
from the CPU in its process control block.

• Context switch time is pure overhead.

• Context switching can significantly affect performance, as modern computers have many
general-purpose and status registers to be saved.
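The PCB fields and the save/restore steps above can be sketched as a small data structure. The field names are illustrative, not those of any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal Process Control Block: the state the OS must keep to
    suspend a process and resume it later (illustrative fields only)."""
    pid: int
    state: str = "ready"          # new / ready / running / blocked / terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(cpu, old, new):
    """Save the CPU context into the old process's PCB, then restore the
    new process's context. Pure overhead: no user work is done here."""
    old.program_counter = cpu["pc"]       # save outgoing process's context
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    cpu["pc"] = new.program_counter       # restore incoming process's context
    cpu["regs"] = dict(new.registers)
    new.state = "running"

# Hypothetical CPU state and two processes.
cpu = {"pc": 104, "regs": {"r0": 7}}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=500, registers={"r0": 42})

context_switch(cpu, p1, p2)
print(cpu["pc"], p1.program_counter, p2.state)   # 500 104 running
```

When P1 is later switched back in, the saved program counter 104 and register values let it continue exactly where it stopped.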

Types of Scheduling

• Long-term scheduling (job scheduling): determines which programs are admitted to the
system for processing. The job scheduler selects processes from the queue and loads them
into memory for execution, where they become available for CPU scheduling.

• Medium-term scheduling: in charge of swapping processes between main memory and
secondary storage.

• Short-term scheduling (low-level scheduling): determines which ready process will be
assigned the CPU when it next becomes available.
Long-term scheduling (Job scheduling)

• Determines which processes are admitted to the system for processing.

• Controls the degree of multiprogramming: if more processes are admitted, CPU usage is
better and it is less likely that all processes will be blocked.

• The long-term scheduler may attempt to keep a mix of processor-bound and I/O-bound
processes.

Medium-Term Scheduling

• Makes swapping decisions based on the need to manage multiprogramming.

• Done by the memory management software.

Short-Term Scheduling

• Determines which process is going to execute next (also called CPU scheduling).

• The short-term scheduler is known as the dispatcher.

• Dispatches the CPU to the process.

Scheduler Comparison

Long-term scheduler:
• Job scheduler.
• Selects processes from a pool and loads them into memory for execution.
• Controls the degree of multiprogramming.
• Speed is lower than the short-term scheduler.

Short-term scheduler:
• CPU scheduler.
• Selects those processes which are ready to execute, for dispatching.
• Provides lesser control over the degree of multiprogramming.
• Speed is the fastest among the three.

Medium-term scheduler:
• Process-swapping scheduler.
• Swaps out and re-introduces processes into memory so that execution can be continued.
• Controls the degree of multiprogramming.
• Speed is in between the short-term and long-term schedulers.


Process Schedulers

Scheduling assigns the processor to processes. Common scheduling criteria:

• Turnaround time : the time required for a particular process to complete, from submission
to completion.

• Response time : the time taken in an interactive program from the issuing of a command to
the start of a response to that command.

• Throughput : the number of processes completed per unit time. This may range from 10 per
second to 1 per hour depending on the specific processes.

• Waiting time : how much time a process spends in the ready queue waiting for its turn on
the CPU.

• Burst time : the time required by a process for CPU execution.

Scheduling Policies

Non-preemptive : once a process is in the running state, it continues until it terminates or blocks
itself for I/O.

Preemptive : the currently running process may be interrupted and moved to the ready state by
the OS. This allows for better service, since no single process can monopolize the processor for
very long.

Scheduling Algorithms

There are various algorithms which are used by the Operating System to schedule the processes
on the processor in an efficient way.

The purpose of a scheduling algorithm:

1. Maximum CPU utilization

2. Fair allocation of the CPU

3. Maximum throughput

4. Minimum turnaround time

5. Minimum waiting time

6. Minimum response time

The following algorithms can be used to schedule jobs.
First Come First Serve algorithm

The first come first serve (FCFS) scheduling algorithm simply schedules jobs according to their
arrival time: the job which comes first in the ready queue gets the CPU first. The earlier the
arrival time of the job, the sooner the job gets the CPU. FCFS can cause very long waits (the
convoy effect) if the burst time of the first process is the longest among all the jobs.

Disadvantages of FCFS

1. The scheduling method is non-preemptive; each process runs to completion.

2. Due to the non-preemptive nature of the algorithm, long delays may occur.

3. Although it is easy to implement, it performs poorly, since the average waiting time is
higher compared to other scheduling algorithms.

In the above example, there are three processes P1, P2 and P3, which enter the ready state at
0 ms, 2 ms and 2 ms respectively. Based on arrival time, process P1 is executed for the first
18 ms, after which process P2 is executed for 7 ms and finally process P3 for 10 ms. Note that
if the arrival times of processes are the same, the CPU can select any of them.

Process Waiting Time Turnaround Time


P1 0 ms 18 ms
P2 16 ms 23 ms
P3 23 ms 33 ms

Average waiting time: (39/3) = 13ms

Average turnaround time: (74/3) = 24.66ms
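The FCFS figures above can be reproduced with a few lines, using the arrival and burst times stated in the example (P1 arrives at 0 ms with an 18 ms burst; P2 and P3 arrive at 2 ms with 7 ms and 10 ms bursts):

```python
def fcfs(processes):
    """Non-preemptive FCFS. processes: list of (name, arrival, burst),
    already sorted by arrival. Returns {name: (waiting, turnaround)}."""
    time, results = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)            # CPU may idle until arrival
        finish = start + burst
        results[name] = (start - arrival, finish - arrival)
        time = finish
    return results

res = fcfs([("P1", 0, 18), ("P2", 2, 7), ("P3", 2, 10)])
print(res)   # {'P1': (0, 18), 'P2': (16, 23), 'P3': (23, 33)}

avg_wait = sum(w for w, _ in res.values()) / len(res)
avg_tat = sum(t for _, t in res.values()) / len(res)
print(avg_wait)   # 13.0, matching the table
```

The long P1 burst at the head of the queue is exactly the convoy effect: P2 and P3 each wait 16 ms or more for a job that arrived only 2 ms before them.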


Shortest Job First (SJF) Scheduling

The SJF scheduling algorithm schedules processes according to their burst time: the process
with the lowest burst time among the available processes in the ready queue is scheduled next.

However, it is very difficult to predict the burst time a process needs, so this algorithm is
difficult to implement in practice.

In the above example, at 0 ms we have only one process, P2, so P2 is executed for 4 ms. After
4 ms there are two new processes, P1 and P3. The burst time of P1 is 5 ms and that of P3 is
2 ms, so P3 is executed first, for 2 ms, because its burst time is less than P1's. Now, after 6 ms,
we have two processes, P1 and P4 (P4 arrived at 5 ms). Of these two, P4 has the smaller burst
time, so P4 is executed for 4 ms, after which P1 is executed for 5 ms. The waiting time and
turnaround time of these processes will be:

Process Waiting Time Turnaround Time

P1 7 ms 12 ms

P2 0 ms 4 ms

P3 0 ms 2 ms

P4 1 ms 5 ms
Total waiting time: (7 + 0 + 0 + 1) = 8ms

Average waiting time: (8/4) = 2ms

Total turnaround time: (12 + 4 + 2 + 5) = 23ms

Average turnaround time: (23/4) = 5.75ms
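The SJF result can be checked with a short simulation. The arrival times of P1 and P3 are not stated in the surviving text; the values 3 ms and 4 ms below are inferred so that the waiting times match the table, and are therefore assumptions:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: (waiting, turnaround)}."""
    remaining = sorted(processes, key=lambda p: p[1])
    time, results = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                          # CPU idles until next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest burst first
        name, arrival, burst = job
        results[name] = (time - arrival, time + burst - arrival)
        time += burst
        remaining.remove(job)
    return results

# (name, arrival, burst); arrivals of P1 and P3 inferred from the table.
res = sjf([("P2", 0, 4), ("P1", 3, 5), ("P3", 4, 2), ("P4", 5, 4)])
print(res)   # P2: (0, 4), P3: (0, 2), P4: (1, 5), P1: (7, 12)
```

The run order P2, P3, P4, P1 matches the narrative above, and the averages come out to 2 ms waiting and 5.75 ms turnaround.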

Round Robin Scheduling Algorithm

The Round Robin scheduling algorithm is one of the most popular scheduling algorithms and is
actually implemented in most operating systems. It is the preemptive version of first come first
serve scheduling, and it focuses on time sharing. In this algorithm, every process gets executed
in a cyclic way. A certain time slice, called the time quantum, is defined in the system. Each
process in the ready queue is assigned the CPU for that time quantum; if the execution of the
process completes during that time, the process terminates, otherwise the process goes back to
the ready queue and waits for its next turn to complete its execution.

In the above example, every process will be given 2ms in one turn because we have taken the time
quantum to be 2ms. So process P1 will be executed for 2ms, then process P2 will be executed for
2ms, then P3 will be executed for 2 ms. Again process P1 will be executed for 2ms, then P2, and so
on. The waiting time and turnaround time of the processes will be:
Process   Waiting Time   Turnaround Time
P1        13 ms          23 ms
P2        10 ms          15 ms
P3        13 ms          21 ms

Total waiting time: (13 + 10 + 13) = 36ms

Average waiting time: (36/3) = 12ms

Total turnaround time: (23 + 15 + 21) = 59ms

Average turnaround time: (59/3) = 19.66ms
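The Round Robin schedule can be simulated with a ready queue. The burst times used below (P1 = 10ms, P2 = 5ms, P3 = 8ms, all arriving at 0ms) are inferred from the turnaround figures above, since the original chart is not reproduced here:

```python
from collections import deque

# Round Robin with a 2ms time quantum; all processes assumed to arrive at 0ms.
QUANTUM = 2
burst = {"P1": 10, "P2": 5, "P3": 8}   # bursts inferred from the table above

queue = deque(burst)                   # ready queue in arrival order
remaining = dict(burst)
time, turnaround = 0, {}
while queue:
    name = queue.popleft()
    run = min(QUANTUM, remaining[name])
    time += run
    remaining[name] -= run
    if remaining[name] == 0:
        turnaround[name] = time        # arrival is 0, so turnaround = finish
    else:
        queue.append(name)             # back of the queue for the next turn

waiting = {n: turnaround[n] - burst[n] for n in burst}
print("waiting:", waiting)
print("turnaround:", turnaround)
```

The computed waiting and turnaround times match the table above.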

Priority Scheduling Algorithm

In priority scheduling, a priority number is assigned to each process. In some systems the lower the number, the higher the priority; in others, the higher the number, the higher the priority. The process with the highest priority among the available processes is given the CPU. Two types of priority scheduling exist: preemptive priority scheduling and non-preemptive priority scheduling.

In the above example, at 0ms we have only one process, P1, so P1 executes for 5ms because we are using the non-preemptive technique here. After 5ms there are three processes in the ready state: P2, P3, and P4. Of these three, P4 has the highest priority, so it is executed for 6ms; after that, P2 is executed for 3ms, followed by P3. The waiting and turnaround times of the processes are:

Process   Waiting Time   Turnaround Time
P1        0 ms            5 ms
P2        10 ms          13 ms
P3        12 ms          20 ms
P4        2 ms            8 ms

Total waiting time: (0 + 10 + 12 + 2) = 24ms

Average waiting time: (24/4) = 6ms

Total turnaround time: (5 + 13 + 20 + 8) = 46ms

Average turnaround time: (46/4) = 11.5ms
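A non-preemptive priority schedule can be simulated in the same style. The arrivals, bursts, and priorities below are inferred from the waiting and turnaround figures (and "lower number = higher priority" is an assumption), since the original diagram is not reproduced here:

```python
# Non-preemptive priority scheduling simulation.
# name: (arrival ms, burst ms, priority) -- lower number = higher priority.
procs = {
    "P1": (0, 5, 3),
    "P2": (1, 3, 2),
    "P3": (2, 8, 4),
    "P4": (3, 6, 1),
}

time, done, pending = 0, {}, dict(procs)
while pending:
    # Among the arrived processes, pick the highest priority (smallest number).
    ready = [(p, a, n) for n, (a, b, p) in pending.items() if a <= time]
    if not ready:                      # CPU idle until the next arrival
        time = min(a for a, b, p in pending.values())
        continue
    _, _, name = min(ready)
    arrival, burst, _ = pending.pop(name)
    time += burst                      # run the chosen job to completion
    done[name] = (time - arrival - burst, time - arrival)  # (waiting, turnaround)

print(done)
```

The resulting figures agree with the table above.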

Multilevel Queue Scheduling

In multilevel queue scheduling, every process is permanently assigned to one queue when it enters the system, and processes do not move between queues. The processes may be divided into different classes, each with its own scheduling requirements, for example interactive processes and batch processes.

The main advantage of multilevel queue scheduling is the low scheduling overhead that results from processes being permanently assigned to their queues.
Let us see another example of multilevel queue scheduling with five queues with different
priorities:

 System processes
 Interactive processes
 Interactive editing processes
 Batch processes
 User processes

Each upper-level queue has absolute priority over the lower-level queues. For example, if an interactive editing process enters the ready queue, the currently running batch process will be preempted.
Competency Level 5.4: Explores how an operating system manages the resources

Memory Management

Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It determines how much memory is to be allocated to each process and decides which process will get memory at what time. It tracks whenever some memory is freed or deallocated and updates the status accordingly.

An Operating System does the following activities for memory management:

 Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are free.
 In multiprogramming, the OS decides which process will get memory when and how much.

 Allocates the memory when a process requests it to do so.


 De-allocates the memory when a process no longer needs it or has been terminated.

Memory Management Unit (MMU)

A hardware device that maps virtual addresses to physical addresses.

In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses.

MMU uses the following mechanism to convert virtual address to physical address.

 The value in the base register is added to every address generated by a user process, which
is treated as offset at the time it is sent to memory. For example, if the base register value is

10000, then an attempt by the user to use address location 100 will be dynamically
reallocated to location 10100.
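The relocation-register mechanism amounts to a single addition; a minimal sketch of the 10000 + 100 example above (the function name is ours):

```python
# Relocation-register address translation: the MMU adds the base
# (relocation) register to every logical address a process generates.
BASE = 10000

def to_physical(logical):
    # The logical address is treated as an offset from the base register.
    return BASE + logical

print(to_physical(100))   # -> 10100, as in the example above
```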
Paging

 Logical address space of a process can be non-contiguous; process is allocated physical

memory whenever the latter is available


 Divide physical memory into fixed-sized blocks called frames (size is power of 2, between

512 bytes and 8192 bytes)


 Divide logical memory into blocks of same size called pages.

 Keep track of all free frames


 To run a program of size n pages, need to find n free frames and load program.

 Set up a page table to translate logical to physical addresses.

 Internal fragmentation: the last page of a process may not be completely filled, wasting part of its frame.
Mapping

The operating system takes care of mapping the logical addresses to physical addresses at the
time of memory allocation to the program. The runtime mapping from virtual to physical address

is done by the memory management unit (MMU) which is a hardware device.

Page table

A page table is the data structure used by a virtual memory system in a computer operating
system to store the mapping between virtual addresses and physical addresses. Logical addresses

are generated by the CPU for the pages of the processes therefore they are generally used by the
processes. Physical addresses are the actual frame address of the memory. They are generally used

by the hardware or more specifically by RAM subsystems.


Virtual memory

A computer can address more memory than the amount physically installed on the system. This
extra memory is actually called virtual memory and it is a section of a hard disk that's set up to

emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical memory.
Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by

using disk. Second, it allows us to have memory protection, because each virtual address is
translated to a physical address.

Virtual memory is partitioned into equal-size pages. Main memory is likewise partitioned into equal-size page frames.

Virtual memory – Goals

 Allow applications larger than physical memory to execute.

 Run partially loaded programs: the entire program need not be in memory all the time.
 Degree of Multiprogramming: Many programs simultaneously reside in memory.

 Application Portability: applications should not have to manage memory resources, and a program should not depend on the memory architecture.

 Permit sharing of memory segments or regions. For example, read-only code segments
should be shared between program instances.

Logical Address is divided into,

Page number(p): Number of bits required to represent the pages in Logical Address Space or
Page number

Page offset(d): Number of bits required to represent particular word in a page or page size of

Logical Address Space or word number of a page or page offset.


Physical Address is divided into

Frame number(f): Number of bits required to represent the frame of Physical Address Space or
Frame number.

Frame offset(d): Number of bits required to represent particular word in a frame or frame size of

Physical Address Space or word number of a frame or frame offset.
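The page/offset split and the page-table lookup can be sketched in a few lines. The 4 KB page size and the page-table contents below are made up purely for illustration:

```python
# Paging translation sketch: split a logical address into (page, offset)
# and map the page through a page table to a frame.
PAGE_BITS = 12
PAGE_SIZE = 1 << PAGE_BITS          # 4096 bytes = 4 KB (assumed)

page_table = {0: 5, 1: 9, 2: 1}     # page number -> frame number (made up)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]        # raises KeyError on an unmapped page
    return frame * PAGE_SIZE + offset

# Logical 0x1A3C is page 1, offset 0xA3C; page 1 maps to frame 9.
print(hex(translate(0x1A3C)))       # -> 0x9a3c
```

The offset is carried over unchanged; only the page number is replaced by the frame number.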

The following list of formulas is very useful for solving the numerical problems based on
paging.

 Physical Address Space = Size of main memory

 Size of main memory = Total number of frames x Page size

 Frame size = Page size

 If the number of frames in main memory = 2^X, then the number of bits in the frame number = X bits

 If the page size = 2^X bytes, then the number of bits in the page offset = X bits

 If the size of main memory = 2^X bytes, then the number of bits in the physical address = X bits

Note :

In general, if the given address consists of n bits, then 2^n locations are possible.

Then, size of memory = 2^n x size of one location. If the memory is byte-addressable, the size of one location = 1 byte; thus, size of memory = 2^n bytes.

If the memory is word-addressable where 1 word = m bytes, then the size of one location = m bytes.

Thus, size of memory = 2^n x m bytes.

Q1. Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte
addressable.

Number of locations possible with 22 bits = 2^22 locations.

It is given that the size of one location = 2 bytes.

Thus, size of memory
= 2^22 x 2 bytes
= 2^23 bytes
= 8 MB

Q2. Calculate the number of bits required in the address for memory having size of 16 GB. Assume
the memory is 4-byte addressable.

Let n be the number of bits required. Then, size of memory = 2^n x 4 bytes.

Since the given memory has a size of 16 GB, we have:

2^n x 4 bytes = 16 GB

2^n x 2^2 = 2^34

2^n = 2^32

∴ n = 32 bits
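The arithmetic in Q1 and Q2 can be checked with a small helper (the function name is ours):

```python
# Size of a memory with the given number of address bits, where each
# addressable location holds bytes_per_location bytes.
def memory_size(address_bits, bytes_per_location=1):
    return (1 << address_bits) * bytes_per_location

print(memory_size(22, 2) // 2**20, "MB")   # Q1: 22-bit, 2-byte addressable
print(memory_size(32, 4) // 2**30, "GB")   # Q2: 32-bit, 4-byte addressable
```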

Q3. A computer has an 18-bit virtual memory address space where six bits are used for a page

address, calculate the total number of pages defined by the above addressing scheme. Consider
the following virtual memory address – 010111000000111100. What is page and

displacement(offset) of this address.

Total number of pages = 2^6 = 64

Offset bits = 18 - 6 = 12 bits

010111 | 000000111100
 Page  |    Offset
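The same split can be checked with bitwise operations:

```python
# Bit-level check of Q3: an 18-bit address with 6 page bits and 12 offset bits.
addr = 0b010111000000111100
OFFSET_BITS = 12

page = addr >> OFFSET_BITS                  # top 6 bits
offset = addr & ((1 << OFFSET_BITS) - 1)    # low 12 bits

print(page, offset)   # page 0b010111 = 23, offset 0b000000111100 = 60
```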
Q4. In a certain computer, the physical memory has a total capacity of 4GB. The size of a memory
frame is 4 KB. Compute the total number of frames in the physical memory.

Number of frames = Size of physical memory / Frame size
= 4 GB / 4 KB
= 2^32 bytes / 2^12 bytes
= 2^20 frames

Q5. If a computer system is byte addressable and uses 32-bit addresses to access any byte in its
memory. what is the maximum usable size of its memory in Giga Bytes (GB)? Show all your
workings clearly.

Address space = 2^32 locations.

Since the memory is byte-addressable, maximum usable size of memory = 2^32 bytes

= 2^2 x 2^30 bytes = 4 GB

Q6. A 32-bit computer has a byte addressable main memory. The computer uses 32-bit address to
access any byte in its memory. It is observed that a maximum of 4 GB memory is available for a
process even after the main memory is replaced by an 8 GB memory. Explain, with the calculations,
why this happens.

Address space = 2^32 locations.

Maximum usable size of memory = 2^32 bytes = 2^2 x 2^30 bytes = 4 GB.

With 32-bit addresses, a process can reference at most 2^32 distinct bytes, so the portion of the 8 GB memory beyond 4 GB cannot be addressed.

Input and output Device Management

Device driver

A device driver is utility software. The computer communicates with peripheral devices through device drivers. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without knowing the precise hardware details. Device drivers depend on both the hardware and the operating system loaded into the computer.

Spooling

Spooling is an acronym for simultaneous peripheral operations on-line. Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on the hard disk which is accessible to I/O devices. An operating system does the following activities related to spooling:

 Handles I/O device data spooling as devices have different data access rates.

 Maintains the spooling buffer which provides a waiting station where data can rest while
the slower device catches up.

 Supports parallel computation, since spooling lets a computer perform I/O in parallel with processing: the computer can read data from a tape, write data to disk, and write output to a printer while it is carrying out its computing task.

Example -

 The most common spooling application is print spooling.

 In print spooling, documents are loaded into a buffer (usually an area on a disk), and then

the printer pulls them off the buffer at its own rate.
 Because the documents are in a buffer where they can be accessed by the printer, you can

perform other operations on the computer while the printing takes place in the
background.

 Spooling also lets you place a number of print jobs in a queue instead of waiting for each one to finish before specifying the next one.
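The print-spooling idea above can be sketched as a simple buffer queue; the function names are ours, and a real spooler would of course drain the queue concurrently with other work:

```python
from collections import deque

# Minimal print-spooler sketch: jobs go into a buffer queue and the
# "printer" drains them at its own pace, so submitting never blocks.
spool = deque()

def submit(doc):
    spool.append(doc)          # returns immediately; the CPU keeps working

def printer_drain():
    while spool:
        print("printing:", spool.popleft())

submit("report.pdf")
submit("photo.png")
printer_drain()                # prints the two queued jobs in order
```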

Advantages

 The spooling operation uses a disk as a very large buffer.


 Spooling is capable of overlapping I/O operation for one job with processor operations for
another job.
