
Purpose Of An Operating System

COMPUTER SCIENCE 9618 PAPER 3


Purpose Of An Operating System
What is an operating system?

The operating system sits between the applications and the hardware devices (RAM, printer, CPU, mouse, etc.).
It provides an interface between users and hardware.

Describe the ways in which the user interface hides the complexities of the hardware from the user

The user interface hides the complexities of the computer hardware and operating system, making it easier for users to interact.
It provides different access systems to accommodate users with varying needs, ensuring ease of use.
Complex commands involving memory locations, buses, and hardware operations are avoided, allowing users to perform tasks without technical knowledge.

Example
Clicking on an icon instead of writing code simplifies interactions.
Using a graphical user interface (GUI) with icons for navigation
makes computing more user-friendly.
Resource Management
Question : What are resources ?

CPU
Memory
Input / Output devices

Resource management focuses on allocating these resources and maximising their use.
It also deals with input/output operations.

Direct Memory Access

A DMA (Direct Memory Access) controller gives hardware direct access to main memory. It allows the hardware to access main memory independently of the CPU.
It frees up the CPU to allow it to carry out other tasks.

1. The DMA controller initiates the data transfer.
2. The CPU carries out other tasks while the transfer takes place.
3. Once the data transfer is complete, an interrupt signal is sent to the CPU from the DMA controller.
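
The sequence above can be illustrated with a minimal Python sketch; it only simulates the idea, with a background thread standing in for the DMA controller (the buffer, timing, and names are illustrative, not real hardware behaviour):

```python
import threading
import time

def dma_transfer(buffer, done_event):
    """Stand-in for the DMA controller: moves data without using the CPU."""
    time.sleep(0.1)                # pretend the block transfer takes some time
    buffer.extend(range(1024))     # data arrives in memory without CPU involvement
    done_event.set()               # "interrupt": tell the CPU the transfer is done

buffer, done = [], threading.Event()
threading.Thread(target=dma_transfer, args=(buffer, done)).start()

# Meanwhile the "CPU" carries on with other work instead of copying bytes itself.
other_work = sum(i * i for i in range(10_000))

done.wait()                        # interrupt received: transfer is complete
print(f"Transferred {len(buffer)} words while the CPU did other work")
```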
Kernel

If an application wants to access a hardware component, such as the flashlight, it first goes to the kernel and seeks permission to use it.

The kernel is the core part of the operating system.
It is responsible for communication between hardware, software, and memory.
It is responsible for process management, device management, and memory management.

Question : How does the operating system hide the complexities of the hardware from the user?

The operating system provides an interface, e.g. a GUI, which makes it easier to use the hardware.
The operating system uses device drivers to manage and communicate with the hardware.
Process Management

A program (executable file) is stored in secondary memory (HDD); when it is loaded into main memory (RAM) and executed, it becomes a process.

A program is the written code.
A process is the executing code.

Multitasking
Multitasking in an operating system allows a user to
perform more than one task at a time.

To ensure multitasking operates correctly, scheduling is used to decide which process should be carried out.
Multitasking ensures the best use of computer resources by monitoring the state of each process.
It should appear that many processes are executed at the same time.
In fact, the kernel interleaves the execution of the processes based on a scheduling algorithm.

Preemptive: if the CPU is allocated to a particular process and a higher-priority process arrives, the CPU is reallocated to the higher-priority process.

Nonpreemptive: no action is taken until the running process terminates.

Preemptive
Resources are allocated to a process for a limited time.
The process can be interrupted while it is running.
More flexible form of scheduling.

Non Preemptive
Once the resources are allocated to a process, the process retains them until it has completed its burst time (the amount of CPU time the process requires).
The process cannot be interrupted while running (it must finish or switch to the waiting state).
More rigid form of scheduling.

Question : Explain why an operating system needs to use scheduling algorithms?

To allow multitasking to take place
To ensure fair usage of the processor
To ensure fair usage of peripherals
To ensure fair usage of memory
To ensure higher priority tasks are executed sooner
To ensure all processes have the opportunity to finish
To minimize the amount of time users must wait for their
results
To keep CPU busy at all times
To service the largest possible number of jobs in a given
amount of time.
Process State
The three process states are: Running, Ready, Blocked.

Case 1 : No interrupts / I/O requests
Case 2 : I/O requests
Case 3 : Preemptive

State 1 : Ready

Description:

The process is not being executed


The process is in the queue
Waiting for the processor’s attention / time slice.

State 2 : Running

Description:

The process is being executed


The process is currently using its allocated processor time / time
slice.

State 3 : Blocked

Description:

The process is waiting for an event, e.g. the completion of an input/output operation,
so it cannot be executed at the moment.
Conditions For Transition Between States

READY → RUNNING

Processor is available, current process no longer running.


Process was at the head of the ready queue // process has the
highest priority.
OS allocates processor to process so that process can execute.

RUNNING → READY

When process is executing, it is allocated a time slice.


When time slice is completed, interrupt occurs, and process can
no longer use processor even though it is capable of further
processing.

RUNNING → BLOCKED

Process is executing (running state), and when it needs to perform an I/O operation, it is placed in the blocked state until the I/O operation is completed.
Question : Explain why a process cannot be moved from the
blocked state to the running state?

When the I/O operation is completed for a process in the blocked state,
the process is transferred to the ready state.
The OS then decides which process to allocate to the processor.

Question : Explain why a process cannot move directly from the ready state to the blocked state?

To be in the blocked state, the process must initiate some I/O operation.
To initiate the operation, the process must be executing.
If the process is in the ready state, it cannot be executing.
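
A minimal Python sketch of the three states and the transitions described above; the dictionary encoding and function name are illustrative only:

```python
# Allowed transitions between process states, as described in the notes above.
ALLOWED = {
    ("READY", "RUNNING"): "OS allocates the processor (time slice starts)",
    ("RUNNING", "READY"): "time slice expires (timer interrupt)",
    ("RUNNING", "BLOCKED"): "process requests an I/O operation",
    ("BLOCKED", "READY"): "the I/O operation completes",
}

def move(state: str, target: str) -> str:
    """Return the new state if the transition is legal, otherwise raise."""
    reason = ALLOWED.get((state, target))
    if reason is None:
        raise ValueError(f"illegal transition {state} -> {target}")
    print(f"{state} -> {target}: {reason}")
    return target

state = "READY"
state = move(state, "RUNNING")   # processor allocated
state = move(state, "BLOCKED")   # I/O requested
state = move(state, "READY")     # I/O complete
# move("READY", "BLOCKED") or move("BLOCKED", "RUNNING") would raise,
# matching the two questions answered above.
```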

Scheduler
High Level Scheduler: Decides which processes are to be loaded
from backing store into ready queue.

Low Level Scheduler: Decides which of the processes in the ready state should get use of the processor, i.e. which process moves into the running state, based on its position in the queue or its priority.
Scheduling Algorithms
First come first served scheduling
Shortest job first scheduling
Shortest remaining time first scheduling
Round Robin

First Come First Served Scheduling


Non-preemptive
Based on arrival time
Uses first-in first-out principle (FIFO)

So the queue is simply the order in which the processes arrive.
Shortest Job First Scheduling
Non-preemptive
Processes are placed in the ready queue as they arrive
With SJF, the process requiring the least CPU time (shortest burst time) is executed first

So the queue is reordered so that the shortest job is at the front.

Shortest Remaining Time First

Preemptive
The processes are placed in the ready queue as they arrive
But when a process arrives whose burst time is shorter than the remaining time of the running process,
the running process is preempted
The shorter process is then executed first
Round Robin
Preemptive
A fixed time slice is given to each process; this is known as the time quantum.
The running order is worked out by giving each process its time slice in turn. If a process completes before the end of its time slice, the next process in the ready queue is given the processor for its time slice.
(Example diagram: ready queue and running process with a time quantum of 5 ms.)

Question : Explain the need for scheduling in process management?

Process scheduling allows more than one task to be executed at the same time, which enables multitasking.
To allow high-priority jobs to be completed first.
To keep the CPU busy all the time.
To ensure that all processes execute efficiently.
To have reduced wait times for all processes and to
ensure all processes have fair access to the CPU
Description And Benefits And Drawbacks

First Come First Served (FCFS) Scheduling

Description:
FCFS scheduling processes requests in the order they
arrive. Each process is queued as it is received and
executed one by one, without preemption.
It follows a non-preemptive approach, meaning once a
process starts execution, it runs until completion.

Benefits:
Simple and easy to implement as it does not require
complex scheduling logic.
Prevents starvation, ensuring every process will
eventually get a chance to run.
Low overhead as there is no need for frequent context
switching.

Drawbacks:
Poor response time for long processes, as shorter jobs
must wait for long ones to finish.
Can lead to convoy effect, where short processes are
stuck waiting behind long processes.
Not suitable for time-sharing systems, as it does not
allow preemption.
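
A minimal Python sketch of FCFS, assuming each process is described by (name, arrival time, burst time); the job data and function name are illustrative:

```python
def fcfs(processes):
    """First Come First Served: run jobs in arrival order, no preemption."""
    time, schedule = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(time, arrival)              # CPU may be idle until the job arrives
        schedule.append((name, start, start + burst, start - arrival))
        time = start + burst                    # the job runs to completion
    return schedule

jobs = [("P1", 0, 7), ("P2", 1, 3), ("P3", 2, 1)]
for name, start, end, wait in fcfs(jobs):
    print(f"{name}: runs {start}-{end}, waiting time {wait}")
# P3 (burst 1) waits 8 units behind the long P1 - the "convoy effect" described above.
```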
Shortest Job First (SJF) Scheduling

Description:
SJF scheduling selects the process with the shortest CPU
burst time and executes it first.
It is non-preemptive, meaning once a process starts, it
runs to completion before the next shortest process is
selected.

Benefits:
Increases throughput, as shorter processes finish
quickly, allowing more processes to be executed in less
time.
Minimizes average waiting time, as shorter processes do
not have to wait for long-running processes.

Drawbacks:
Can cause starvation, as long-running processes may
never get CPU time if short jobs keep arriving.
Difficult to implement, as it requires knowing the exact
burst time of processes beforehand.
Not ideal for dynamic environments, where process
execution times may change.
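
A minimal Python sketch of non-preemptive SJF under the same (name, arrival, burst) assumption; note that it needs the burst times up front, which is exactly the practical drawback mentioned above:

```python
def sjf(processes):
    """Shortest Job First (non-preemptive): of the jobs that have already
    arrived, always run the one with the smallest burst time next."""
    pending = sorted(processes, key=lambda p: p[1])    # sorted by arrival time
    time, schedule = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                                  # CPU idle: jump to next arrival
            time = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])           # shortest burst among arrived jobs
        pending.remove(job)
        name, _, burst = job
        schedule.append((name, time, time + burst))
        time += burst                                  # runs to completion
    return schedule

print(sjf([("P1", 0, 7), ("P2", 1, 3), ("P3", 2, 1)]))
# [('P1', 0, 7), ('P3', 7, 8), ('P2', 8, 11)] - the short job P3 overtakes P2
```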
Shortest Remaining Time First (SRTF) Scheduling

Description:
Preemptive
The processes are placed in ready queue as they arrive
But when a process with the shortest burst time arrives
the existing process is removed
The shorter process is then executed first

Benefits:
More responsive than SJF, as shorter jobs can interrupt
longer ones.
Minimizes average turnaround time by prioritizing short
tasks.
Efficient CPU utilization, reducing idle time.

Drawbacks:
Can cause high overhead, as frequent context switching
occurs when new shorter processes arrive.
Starvation of long processes, as they may keep getting
preempted by shorter jobs.
Requires accurate estimation of remaining CPU time,
which is difficult in real-world scenarios.
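
A minimal Python sketch of SRTF, simulated one time unit at a time under the same (name, arrival, burst) assumption; a real scheduler does not know the remaining times exactly, as noted above:

```python
def srtf(processes):
    """Shortest Remaining Time First (preemptive), one time unit per step."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: t for name, t, _ in processes}
    time, timeline = 0, []
    while any(r > 0 for r in remaining.values()):
        ready = [n for n in remaining if arrival[n] <= time and remaining[n] > 0]
        if not ready:
            time += 1                                    # CPU idle until next arrival
            continue
        current = min(ready, key=lambda n: remaining[n]) # may preempt the previous job
        remaining[current] -= 1
        timeline.append(current)
        time += 1
    return timeline

print(srtf([("P1", 0, 7), ("P2", 1, 3), ("P3", 2, 1)]))
# P1 runs at t=0, is preempted by P2 at t=1, P2 by P3 at t=2, then P2 resumes, then P1
```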
Round Robin (RR) Scheduling

Description:
RR assigns a fixed time slice (quantum) to each process in
a cyclic order.
If a process does not finish within its time slice, it is
preempted and moved to the back of the queue.

Benefits:
Ensures fairness, as every process gets CPU time equally.
Prevents starvation, since all processes receive
execution time in each cycle.
Better responsiveness, making it suitable for time-
sharing systems and multi-user environments.

Drawbacks:
High context switching overhead, especially if the time
slice is too small.
Inefficient for long processes, as they require multiple
cycles to complete.
Performance depends on time quantum selection – too
small leads to excessive switching, too large behaves like
FCFS.
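
A minimal Python sketch of Round Robin, assuming for simplicity that all processes are already in the ready queue at time 0; the quantum of 5 matches the earlier example:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin: each process gets at most `quantum` time units per turn."""
    queue = deque(processes)               # items are (name, remaining_burst)
    time, timeline = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for a full slice or until completion
        timeline.append((name, time, time + run))
        time += run
        if remaining > run:                # unfinished: preempt, go to back of queue
            queue.append((name, remaining - run))
    return timeline

print(round_robin([("P1", 7), ("P2", 3), ("P3", 1)], quantum=5))
# [('P1', 0, 5), ('P2', 5, 8), ('P3', 8, 9), ('P1', 9, 11)]
```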
Interrupt Handling

User Mode And Kernel Mode

The processor switches between user mode and kernel mode.

An interrupt is a signal sent to the OS, usually by a device connected to the computer; some interrupts are generated within the computer itself.
The processor will check for interrupt signals and will switch to kernel mode if any of the following types of interrupt signal are sent:

Device Interrupt (Printer out of paper)


Exception (Instruction faults such as division by zero)
Traps / Software Interrupts (Process requesting a
resource)

Interrupt Dispatch Table: used to determine the current response to an interrupt.

Interrupt Priority Level: numbered (0 - 31)

When an interrupt is received, other interrupts are disabled so that the process that deals with the interrupt cannot itself be interrupted.
The state of the current task/process is saved on the kernel stack.
The system now jumps to the interrupt service routine (using the IDT).
Once completed, the state of the interrupted process is restored using the values stored on the kernel stack, and the process then continues.
After an interrupt has been handled, interrupts need to be re-enabled so that any further interrupts can be dealt with.
Question : How does the kernel of the OS act as an interrupt
handler?

When an interrupt is received, other interrupts are


disabled to ensure that the process handling the
interrupt cannot be interrupted itself.
The state of the current task or process is saved on the
kernel stack to preserve its progress.
The system jumps to the appropriate Interrupt Service
Routine (ISR) by looking it up in the Interrupt Descriptor
Table (IDT).
The ISR is executed, handling the specific interrupt event
(e.g., hardware or software request).
Once the interrupt is handled, the state of the previously
interrupted process is restored from the kernel stack,
allowing it to continue its operation.
After the interrupt has been fully handled, interrupts are
re-enabled so the system can respond to future
interrupts.
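
A minimal Python sketch of the dispatch sequence described above; the table contents, interrupt numbers, and saved state are illustrative, not a real kernel's data structures:

```python
# Illustrative interrupt dispatch table: interrupt number -> service routine.
IDT = {
    0: lambda: print("ISR 0: handle divide-by-zero exception"),
    7: lambda: print("ISR 7: handle 'printer out of paper' device interrupt"),
}

interrupts_enabled = True
kernel_stack = []

def handle_interrupt(number, current_process_state):
    global interrupts_enabled
    interrupts_enabled = False                   # 1. disable further interrupts
    kernel_stack.append(current_process_state)   # 2. save state on the kernel stack
    IDT[number]()                                # 3. jump to the ISR via the table
    restored = kernel_stack.pop()                # 4. restore the interrupted process
    interrupts_enabled = True                    # 5. re-enable interrupts
    return restored

state = handle_interrupt(7, {"pc": 0x400, "registers": [1, 2, 3]})
print("interrupted process resumes with", state)
```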

Question : How is interrupt handling used to manage low-level scheduling?
The system uses regular timer interrupts to manage how
long each process can run, which helps with time-sharing
in multitasking systems.
When a timer interrupt occurs, the kernel checks if a
higher-priority task is waiting to run and can preempt the
current task.
If needed, the kernel saves the current process's state
and switches to a new process (context switch), enabling
efficient process scheduling.
Interrupts allow the system to quickly respond to real-
time events (like network requests or hardware I/O),
which helps manage process priorities and scheduling in
a dynamic environment.
Memory Management

Concept Of Paging

Memory is divided into fixed-size blocks called pages. Pages that are not currently needed in main memory are stored on the hard drive.
Page Replacement
Page replacement occurs when a requested page is not in
memory (flag = 0). When paging in/out from memory, it is
necessary to consider how the computer can decide which
page(s) to replace to allow the requested page to be loaded.
When a new page is requested but it is not in memory a page
fault occurs.

Page Replacement Algorithms

(1) First In First Out

(2) Optimal Page Replacement: looks ahead at future page requests to decide which frame can be replaced in the event of a page fault.

(3) Longest Resident: the page which has been present in memory for the longest time is swapped out. (The time of entry must be recorded in the page table.)

(4) Least Used: the page which has been used the least is swapped out. (The number of times the page has been accessed must be recorded in the page table.)
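
A minimal Python sketch of FIFO page replacement (algorithm 1 above); the reference string and frame count are illustrative:

```python
from collections import deque

def fifo_page_faults(reference_string, n_frames):
    """FIFO page replacement: evict the page that was loaded first."""
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                       # page already in memory: no fault
        faults += 1                        # page fault: the page must be loaded
        if len(frames) == n_frames:        # memory full: evict the oldest page
            frames.discard(order.popleft())
        frames.add(page)
        order.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], n_frames=3))  # 9 faults
```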

Question : Explain why the algorithms (Longest Resident / Least Used) may not be the best choice for efficient memory management

Longest Resident: a page that has been in memory for a long time may still be accessed often, so it is not a good candidate for removal.
Least Used: a page that has only just been loaded has a low use count, so it is likely to be a candidate for immediately being swapped out.
Segmentation
A process's segments are mapped into main memory via a segment table.

Internal Fragmentation Vs External Fragmentation

Internal Fragmentation (associated with paging): wasted memory inside allocated blocks because a program doesn't use all the space it was given.
External Fragmentation (associated with segmentation): free memory is scattered in small, non-contiguous blocks, making it hard to allocate large chunks of memory.
Difference Between Paging And Segmentation

A page is a fixed-size block of memory; a segment is a variable-size block of memory.
Since the page size is fixed, it is possible that blocks may not be fully used, which can lead to internal fragmentation; since segments are variable-size, there is an increased risk of external fragmentation.
The operating system divides the memory into pages; the compiler is responsible for calculating segment sizes.
Procedures (modules) cannot be separated when using paging; they can be separated when using segmentation.
Access time is faster with paging than with segmentation.
Virtual Memory

Data is swapped between RAM and disk when needed.

Question : Describe what is meant by virtual memory?

Secondary storage is used to extend RAM,
so the CPU can access more memory space than the available RAM.
Only the part of the program / data in use needs to be in RAM.
Data is swapped between RAM and disk.
Virtual memory is created temporarily.

Question : Explain how paging is used to manage virtual memory?
Divides RAM into frames
Divides virtual memory into blocks of the same size, called pages
Frames/pages are a fixed size
Sets up a page table to translate logical addresses to
physical addresses
Keeps track of all free frames
Swaps pages in memory with new pages from disk when
needed
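
A minimal Python sketch of the page-table translation described in the list above, assuming 4 KB pages and an illustrative page table; a cleared "present" flag models a page fault:

```python
PAGE_SIZE = 4096                      # assume 4 KB pages/frames

# Illustrative page table: page number -> (frame number, present flag).
page_table = {0: (5, True), 1: (9, True), 2: (None, False)}

def translate(logical_address):
    """Translate a logical address into a physical address using the page table."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame, present = page_table[page]
    if not present:
        raise RuntimeError(f"page fault: page {page} must be loaded from disk")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))        # page 1, offset 0x234 -> frame 9 -> 0x9234
try:
    translate(2 * PAGE_SIZE)         # page 2 is not present in RAM
except RuntimeError as fault:
    print(fault)                     # page fault: the page is swapped in from disk
```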
Question : One drawback of using virtual memory is disk thrashing.
Describe what is meant by the term disk thrashing?

Pages are required back in RAM as soon as they are moved to disk
There is continuous swapping (of the same pages)
No useful processing happens
Because pages that are in RAM and on disk are interdependent
Nearly all processing time is used for swapping pages.

Question : Explain the circumstances in which disk thrashing could occur?

Disk thrashing is a problem that may occur when frequent transfers between main memory and secondary memory take place.
Disk thrashing is a problem that may occur when virtual
memory is being used.
As main memory fills up, more pages need to be swapped
in and out of virtual memory.
This swapping leads to a very high rate of hard disk head
movements.
Eventually, more time is spent swapping the pages than
processing the data.