
16.1 Operating System (OS) Notes

Objective:
 Show understanding of how an OS can maximise the use of resources.
 Describe ways in which the user interface hides the complexities of the hardware from the user.
 Show understanding of process management.
The concept of multi-tasking and a process.
Process states: running, ready and blocked.
The need for scheduling, and the function and benefits of different scheduling routines (including round robin, shortest job first, first come first served, shortest remaining time).
How the kernel of the OS acts as an interrupt handler and how interrupt handling is used to manage low-level scheduling.
 Show understanding of virtual memory, paging and segmentation for memory management.
The concepts of paging, virtual memory and segmentation. The difference between paging and segmentation. How pages can be replaced. How disk thrashing can occur.
An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.
Aspects relating to the use of an OS
➢ A computer system needs a program that begins to run when the system is first switched on. At this stage the operating system programs are stored on disk, so there is no operating system running. However, the computer has, stored in ROM, a basic input/output system (BIOS) which starts a bootstrap program. It is this bootstrap program that loads the operating system into memory and sets it running.
➢ The OS provides facilities to have more than one program stored in memory. Only one program can access the CPU at any given time, but others are ready when the opportunity arises; this is known as multi-programming and happens for a single user. Some systems are designed to have many users simultaneously logged in, which is described as a time-sharing system.

Resource Utilization
One operating system task is to maximise the utilisation of computer resources. Resource management can be split into three areas:
1) CPU 2) Memory 3) Input/output (I/O) system.

Resource management of the CPU involves the concept of scheduling to allow better utilisation of CPU time and resources.
Regarding input/output operations, the operating system will need to deal with:
➢ Any I/O operation which has been initiated by the computer user.
➢ Any I/O operation which occurs while software is being run and resources, such as printers or disk drives, are requested.


Direct Data Transfer between Memory and I/O Devices using DMA
A Direct Memory Access (DMA) controller is needed to allow hardware to access main memory independently of the CPU. DMA frees up the CPU to carry out other tasks while slower I/O operations are taking place.
The slow speed of I/O compared to a typical CPU clock cycle shows that management of CPU usage is vital to ensure that the CPU does not remain idle while I/O is taking place.

The Kernel

The kernel is part of the operating system. It is the central component responsible for communication between hardware, software and memory. It is responsible for process management, device management, memory management, interrupt handling and input/output file operations.
ESQ: How does the operating system hide the complexities of the hardware from the users?
Ans: By using a GUI rather than a CLI; using device drivers, which simplify the complexity of hardware interfaces; simplifying the saving and retrieving of data from memory and storage devices; and carrying out background utilities, such as virus scanning.

Two important management tasks of operating systems are:
1. Processor Management – how to allocate time to different processes
2. Memory Management – which part of a process should be in memory

Process Management
Multitasking
Multitasking allows computers to carry out more than one task (process) at a time. A process is a program that has started to be executed. Each of these processes will share common hardware resources. To ensure multitasking operates correctly, scheduling is used to decide which processes should be carried out. In multitasking, many processes are being carried out at the same time, and the best use of computer resources is ensured by monitoring the state of each process.
Types of Multitasking Operating Systems
Preemptive:
➢ Resources are allocated to a process for a limited time.
➢ A process can be interrupted while it is running.
➢ This is a more flexible form of scheduling.
Non-Preemptive:
➢ Once resources are allocated to a process, the process retains them until it has completed its burst time (the time when a process has control of the CPU) or the process has switched to the waiting state.
➢ A process cannot be interrupted while running; it must first finish or switch to a waiting state.
➢ This is a more rigid form of scheduling.

Process Scheduling
Programs that are available to be run on a computer system are initially stored on disk. A user could submit a program as a 'job', which would include the program and some instructions about how it should be run.

The job scheduler is the part of the OS which selects processes and moves them from one state to another.
The long-term or high-level scheduler controls the selection of a program stored on disk to be moved into main memory. Its task is to decide which new processes are to be loaded from backing store into the ready state.
The medium-term scheduler is in charge of handling swapped-out processes. A running process may become suspended if it makes an I/O request, e.g. a program has to be moved back to disk because memory is becoming overcrowded. This is controlled by the medium-term scheduler.
The short-term or low-level scheduler decides which of the processes in the ready state should get use of the processor (i.e. be put into the running state), based on priority; it is invoked after an interrupt or OS call. Low-level scheduling resolves situations in which there are conflicts between two processes requiring the same resource.
Process Priority Depends On:
 Its category (is it a batch, online or real-time process?).
 Whether the process is CPU-bound (e.g. a large calculation would need long CPU cycles and short I/O cycles) or I/O-bound (e.g. printing a large number of documents would require short CPU cycles but very long I/O cycles).
 Resource requirements (which resources does the process require, and how many?).
 Whether the process can be interrupted during running.

Pre-emptive Scheduling: A process currently in the running state may be interrupted and moved to the ready state for overall better service of the system.
Non-Pre-emptive Scheduling: Once a process is in the running state, it will continue to run until it terminates or blocks itself for I/O.

Process States
A process is defined as 'a program being executed'. A Process Control Block (PCB) is a data structure which contains all of the data needed for a process to run. It is created in memory when data needs to be received during execution time.
The PCB Will Store:
➢ Current process state (ready, running or blocked).
➢ Process privileges (such as which resources it is allowed to access).
➢ Register values (PC, MAR, MDR and ACC).
➢ Process priority and any scheduling information.
➢ The amount of CPU time the process will need to complete.
➢ A process ID which allows it to be uniquely identified.
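As an illustration of the idea (not a real OS structure), a minimal Python sketch of a PCB holding the fields listed above might look like this; the field names and example values are assumptions:

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    process_id: int                                   # unique identifier for the process
    state: str = "ready"                              # ready, running or blocked
    privileges: list = field(default_factory=list)    # resources the process may access
    registers: dict = field(default_factory=dict)     # saved PC, MAR, MDR, ACC values
    priority: int = 0                                 # scheduling information
    cpu_time_needed: int = 0                          # estimated CPU time to complete

pcb = ProcessControlBlock(process_id=17, priority=2, registers={"PC": 64, "ACC": 0})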

A Process Can Be in One of Three States:
➢ Running State:
✓ The process is being executed by the processor.
✓ The process is currently using its allocated processor time / time slice.
✓ Only one process can be in the running state at a time.
✓ When a process has completed its time slice it is shifted from the running state to the ready state.
✓ If a process in the running state requires input/output, the running process is halted and moves to the blocked state.
➢ Ready State:
✓ Processes are in a queue waiting for the processor's attention.
✓ Processes are not being executed, but as soon as it gets its turn a process will be shifted to the running state.
✓ A new process always comes into the ready state.
➢ Blocked / Waiting State:
✓ The process is waiting for an event, e.g. input/output, so it cannot be executed at the moment.
✓ As soon as a process in the blocked state completes its input/output operation it is shifted to the ready state.

Transitions between States:
➢ A new process arrives in memory and a PCB is created; it changes to the ready state.
➢ A process in the ready state is given access to the CPU by the dispatcher; it changes to the running state.
➢ A process in the running state is halted by an interrupt; it returns to the ready state.
➢ A process in the running state cannot progress until some event has occurred (I/O, perhaps); it changes to the waiting state (sometimes called the 'suspended' or 'blocked' state).
➢ A process in the waiting state is notified that the event is completed; it returns to the ready state.
➢ A process in the running state completes execution; it changes to the terminated state.
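As a minimal Python sketch of these transitions (the event names are illustrative assumptions, not terminology from the notes), the legal moves can be written as a lookup table:

TRANSITIONS = {
    ("new", "admitted"):        "ready",        # PCB created, process enters the ready state
    ("ready", "dispatched"):    "running",      # dispatcher gives the process the CPU
    ("running", "interrupted"): "ready",        # halted by an interrupt / time slice over
    ("running", "io_request"):  "blocked",      # must wait for an event such as I/O
    ("blocked", "io_complete"): "ready",        # event finished, waits again for the CPU
    ("running", "exit"):        "terminated",   # execution complete
}

def next_state(current, event):
    return TRANSITIONS[(current, event)]

print(next_state("running", "io_request"))      # -> blocked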

Classification of Processes or Jobs
1. Input/output-bound jobs need little processing but do need to use peripheral devices considerably.
2. Processor-bound (CPU-bound) jobs need a considerable amount of processor time and make little use of peripheral devices.
Interrupts
An interrupt is a signal sent to the processor by hardware or software indicating that it requires the processor's attention.
Types of Interrupts
1) Hardware-generated interrupts: • A printer informing the processor that it is out of paper or has a paper jam • The reset button being pressed by the user • The keyboard indicating that data has been entered and requires saving • A mouse click to refresh the current screen
2) Software or program interrupts: • Division by zero • A file not being found
3) Clock interrupt – generated by the internal clock (scheduling)
4) I/O interrupts – generated by input/output devices
Two Main Reasons for Interrupts
➢ Processes consist of alternating periods of CPU usage and I/O usage. I/O takes far too long for the CPU to remain idle waiting for it to complete. The interrupt mechanism is used when a process in the running state makes a system call requiring an I/O operation and has to change to the waiting state.
➢ The scheduler decides to halt the process for some reason. The OS kernel invokes an interrupt-handling routine. The current values stored in the registers must be recorded in the process control block. This allows the process to continue execution when it eventually returns to the running state.
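A minimal Python sketch of this idea of saving and restoring context via the PCB (the register names and dictionary layout are assumptions for illustration, not a real kernel interface):

def handle_interrupt(cpu_registers, running_pcb, new_state="ready"):
    running_pcb["registers"] = dict(cpu_registers)   # record current register values in the PCB
    running_pcb["state"] = new_state                 # "ready" if pre-empted, "blocked" if waiting for I/O

def resume(cpu_registers, running_pcb):
    cpu_registers.update(running_pcb["registers"])   # restore the saved context
    running_pcb["state"] = "running"                 # process continues where it stopped

cpu = {"PC": 120, "ACC": 7}
pcb = {"pid": 3, "state": "running", "registers": {}}
handle_interrupt(cpu, pcb)    # process loses the CPU; its context is saved
resume(cpu, pcb)              # later, the process carries on from the saved point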
Objectives of Scheduling
➢ To keep the CPU busy at all times and so maximise throughput.
➢ To give each process a fair share of CPU time and be fair to all users.
➢ To allow all processes to complete in a reasonable amount of time.
➢ To maximise the use of peripherals.
➢ To help prevent deadlock, resolving situations in which there are conflicts between two processes requiring the CPU at the same time.
➢ To allow multiprogramming.
➢ To allow the highest-priority jobs to be executed first.
➢ To service the largest possible number of jobs in a given amount of time.
➢ To minimise the amount of time users must wait for their results.

Scheduling Routines (Algorithms)
 First Come First Served Scheduling (FCFS):
This is a non-preemptive algorithm similar in concept to a queue structure which uses the first in, first out (FIFO) principle: data added to the queue first is the data that leaves the queue first. Jobs are executed on a first come, first served basis, which can result in poor performance as the average wait time is high with this routine. FCFS is easy to understand and implement: there is no complex logic, as each process request is queued as it is received and executed one by one. Starvation does not occur, because every process will eventually get a chance to run.
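A minimal Python sketch of FCFS, using hypothetical (name, burst time) pairs rather than data from the notes:

def fcfs(jobs):
    """Serve jobs strictly in arrival order; return (name, wait_time) pairs."""
    clock = 0
    schedule = []
    for name, burst in jobs:
        schedule.append((name, clock))   # wait time = time already spent queuing
        clock += burst                   # CPU is held until the job finishes
    return schedule

print(fcfs([("P1", 24), ("P2", 3), ("P3", 3)]))
# P2 and P3 wait behind the long job P1, so the average wait time is high.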

 Shortest Job First Scheduling (SJF)
This is one of the best approaches to minimise process waiting times. SJF is non-preemptive. For SJF the burst time of a process should be known in advance. Processes are executed in ascending order of the amount of CPU time required. Short processes are executed first, followed by longer processes, which leads to increased throughput because more processes can be executed in a smaller amount of time.
✓ Easy to implement in batch systems where the required CPU time is known in advance.
✓ Impossible to implement in interactive systems where the required CPU time is not known.
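A minimal non-preemptive SJF sketch, assuming all jobs are available at time 0 and their burst times are known in advance (the job data is illustrative):

def sjf(jobs):
    """Run jobs in ascending order of burst time; return (name, wait_time) pairs."""
    clock = 0
    schedule = []
    for name, burst in sorted(jobs, key=lambda job: job[1]):
        schedule.append((name, clock))
        clock += burst
    return schedule

print(sjf([("P1", 24), ("P2", 3), ("P3", 3)]))
# The short jobs run first, so the average waiting time drops compared with FCFS.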
 Shortest Remaining Time First Scheduling (SRTF)
With SRTF, processes are placed in the ready queue as they arrive, but when a process with a shorter burst time arrives, the existing process is removed (pre-empted) from execution and the shorter process is executed first. SRTF is preemptive.
SRTF is impossible to implement in interactive systems where the required CPU time is not known. It is often used in batch environments where short jobs need to be given preference.
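A minimal SRTF sketch simulating one time unit at a time; the (name, arrival, burst) tuples are assumed example data:

def srtf(jobs):
    """jobs: list of (name, arrival_time, burst_time). Returns finish times."""
    remaining = {name: burst for name, arrival, burst in jobs}
    finish = {}
    clock = 0
    while remaining:
        ready = [(name, remaining[name]) for name, arrival, _ in jobs
                 if name in remaining and arrival <= clock]
        if not ready:                       # CPU idle until the next arrival
            clock += 1
            continue
        name, _ = min(ready, key=lambda item: item[1])   # least remaining time wins
        remaining[name] -= 1                # run the chosen job for one time unit
        clock += 1
        if remaining[name] == 0:
            finish[name] = clock
            del remaining[name]
    return finish

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))   # shorter arrivals pre-empt P1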

 Round Robin:
A round-robin algorithm allocates a time slice to each process and is therefore preemptive, because a process will be halted when its time slice has run out. It can be implemented as a FIFO queue. It normally does not involve prioritising processes.
➢ Each process is served by the CPU for a fixed time slice (so all processes are given the same priority).
➢ Starvation does not occur, because in each round-robin cycle every process is given a fixed time slice in which to execute.
➢ The fixed time each process is given to execute is called a quantum.
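A minimal round-robin sketch using a FIFO queue and an assumed quantum of 4 time units; the job data is illustrative:

from collections import deque

def round_robin(jobs, quantum=4):
    """jobs: list of (name, burst_time). Returns the order of CPU bursts."""
    queue = deque(jobs)
    trace = []
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        trace.append((name, slice_used))                  # process runs for one time slice
        if remaining > slice_used:
            queue.append((name, remaining - slice_used))  # pre-empted and re-queued
    return trace

print(round_robin([("P1", 10), ("P2", 5), ("P3", 8)], quantum=4))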

Memory Management
The memory manager, which is part of the operating system, determines which processes should be in main memory and where they should be stored. It determines how memory is allocated when a number of processes are competing with each other. When a process starts up it is allocated memory; when it completes, the OS deallocates its memory space.
 Single (contiguous) Allocation:
All of the memory is made available to a single application. This leads to inefficient use of main memory.
Methods Used for Partitioning Main Memory
 Paged Memory/Paging
The modern approach is to use paging. A process is divided into equal-sized pages and memory is divided into frames of the same size. Secondary storage (virtual memory) can also be divided into frames.
❖ Main memory is divided into equal-size blocks, called page frames.
❖ Virtual memory (part of the hard disk) is divided into blocks of the same size, called pages.
❖ Each process that is executed is divided into blocks of the same size to fit the pages and page frames.
❖ Not all of the pages of a program need to be loaded to start execution.
❖ If an instruction is to be executed which is not in a page currently loaded, then the required page must be swapped into memory at the expense of another page (when main memory is full).
❖ Each process has a page table that is used to manage the pages of that process.
❖ A program's pages may be scattered throughout the available page frames.
❖ The OS manages which page frames are allocated to which pages of a process by using the page table.

The page table will hold the following information:
➢ Page number – the page stored in secondary memory is given a number.
➢ Presence flag – whether the page is in memory or not.
➢ Page frame address – the start address in memory of the page frame.
➢ Time of entry – when the page was stored/swapped into main memory.
➢ Number of times the page has been accessed.
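A minimal sketch of how a page-table lookup might work, assuming 1 KiB pages and an illustrative two-entry table (these values are not from the notes): the logical address is split into a page number and an offset, and a missing page raises a page fault.

PAGE_SIZE = 1024

page_table = {
    0: {"present": True,  "frame_address": 8192},    # page 0 is loaded in a frame
    1: {"present": False, "frame_address": None},    # page 1 is still on disk
}

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    entry = page_table[page]
    if not entry["present"]:
        raise RuntimeError(f"page fault: page {page} must be swapped in")
    return entry["frame_address"] + offset

print(translate(100))    # page 0, offset 100 -> physical address 8292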

When paging is being used, the starting situation is that the set of pages comprising a process is stored on disk. One or more of these pages is loaded into memory when the process changes to the ready state. When the process is dispatched to the running state, it starts executing. At some stage, the process will need access to a page that the page table indicates is not in memory. This is called a page fault condition. In order to bring the required page in from secondary storage, a page may need to be taken out of memory first. This is when a page replacement algorithm is needed.
Page Replacement Algorithms
❖ First in First Out
❖ Least recently used page
❖ Least used page
❖ Longest Resident (max time in memory)
❖ Shortest Resident
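As a minimal sketch of the first two policies listed, the functions below count page faults for a reference string with a fixed number of frames; the reference string and frame count are assumed example data:

from collections import OrderedDict

def count_faults_fifo(references, frames):
    loaded, faults = [], 0
    for page in references:
        if page not in loaded:
            faults += 1
            if len(loaded) == frames:
                loaded.pop(0)                    # evict the page that was loaded first
            loaded.append(page)
    return faults

def count_faults_lru(references, frames):
    loaded, faults = OrderedDict(), 0
    for page in references:
        if page in loaded:
            loaded.move_to_end(page)             # mark as most recently used
        else:
            faults += 1
            if len(loaded) == frames:
                loaded.popitem(last=False)       # evict the least recently used page
            loaded[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults_fifo(refs, 3), count_faults_lru(refs, 3))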

Disk Thrashing:
Systems that are running virtual memory can have the disadvantage of disk thrashing. Disk thrashing occurs when part of a process on one page requires another page which is on disk. When that page is loaded, it almost immediately requires the original page again. This can lead to an almost never-ending loading and unloading of pages.

Segmentation
An early approach to memory management when different processes were loaded into memory simultaneously was to partition memory. The aim was to load the whole of a process into one partition. This was wasteful of memory if the process size was less than the partition size. An improvement was dynamic partitioning, where the partition size was allowed to adjust to match the process size.
An extension of this idea which allowed larger processes to be handled was segmentation. Segmentation has the following characteristics:
❖ Memory is divided into variable-length blocks called segments.
❖ Jobs or files can consist of many segments.
❖ An index of segments is stored, which must hold the base address and length of each segment.
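A minimal sketch of segmented address translation based on the description above; the segment table contents are illustrative assumptions:

segment_table = {
    0: {"base": 4000, "length": 1200},   # e.g. a code segment
    1: {"base": 9000, "length": 300},    # e.g. a stack segment
}

def translate_segment(segment_number, offset):
    entry = segment_table[segment_number]
    if offset >= entry["length"]:                      # segments are variable length
        raise RuntimeError("offset beyond the end of the segment")
    return entry["base"] + offset

print(translate_segment(0, 500))   # -> physical address 4500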

Two factors limited the efficiency of segmentation:
➢ The first was that segments were not constrained to be the same size.
➢ The second was that the size of a process did not allow all of the segments for one process to be in memory at the same time. Segments had to be moved from disk to memory, but then back again to disk when a different segment was needed in memory.
These two factors combined to cause fragmentation both of memory and of disk storage. This resulted in degradation of system performance.
Differences Between Paging and Segmentation
Paging:
➢ A page is a fixed-size block of memory.
➢ Since the block size is fixed, it is possible that not all blocks are fully used – this can lead to internal fragmentation.
➢ The user provides a single address value – the hardware decides the actual page size.
➢ A page table maps logical addresses to physical addresses (it contains the base address of each page stored in the frames of physical memory).
➢ The process of paging is essentially invisible to the user/programmer.
Segmentation:
➢ A segment is a variable-size block of memory.
➢ Because memory blocks are of variable size, this reduces the risk of internal fragmentation but increases the risk of external fragmentation.
➢ The user supplies the address in two values (the segment number and the offset/segment size).
➢ Segmentation uses a segment map table containing the segment number and offset (segment size); it maps logical addresses to physical addresses.
➢ Segmentation is essentially a visible process to the user/programmer.

In segmentation, a large process is divided into segments for loading into memory, but the segments are not constrained to be the same size.
In paging, a large process is divided into pages, which have to be the same size.

***********************


EXAM STYLE QUESTIONS

ESQ# 1 Virtual memory, paging and segmentation are used in memory management.
Explain what is meant by virtual memory. P31 Oct 2022 [3]
Ans: Secondary storage (disk) is used to extend the RAM, so the CPU appears to be able to access more memory space than the available RAM. Only the data in use needs to be in main memory, so data can be swapped between RAM and virtual memory as necessary. Virtual memory is created temporarily.

ESQ# 2 State one difference between paging and segmentation in the way memory is divided.
Ans: Paging divides memory into fixed-size blocks, whereas segmentation divides memory into variable-sized blocks. The operating system divides memory into pages; the compiler is responsible for calculating segment size. Access times for paging are faster than for segmentation.

**************************************
