16.1 Operating System OS 2024
Objectives:
➢ Show understanding of how an OS can maximize the use of resources.
➢ Describe the ways in which the user interface hides the complexities of the hardware from the user.
➢ Show understanding of process management: the concept of multi-tasking and a process; the process states running, ready and blocked; the need for scheduling and the function and benefits of different scheduling routines (including round robin, shortest job first, first come first served and shortest remaining time); how the kernel of the OS acts as an interrupt handler and how interrupt handling is used to manage low-level scheduling.
➢ Show understanding of virtual memory, paging and segmentation for memory management: the concepts of paging, virtual memory and segmentation; the difference between paging and segmentation; how pages can be replaced; how disk thrashing can occur.
An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.
Aspects relating to the use of an OS
➢ A computer system needs a program that begins to run when the system is first switched on. At this stage the operating system programs are stored on disk, so there is no operating system in memory. However, the computer has a basic input/output system (BIOS) stored in ROM, which starts a bootstrap program. It is this bootstrap program that loads the operating system into memory and sets it running.
➢ The OS provides facilities for more than one program to be stored in memory. Only one program can access the CPU at any given time, but the others are ready when the opportunity arises; this is known as multi-programming and happens for one single user. Some systems are designed to have many users simultaneously logged in, which is described as a time-sharing system.
Resource management of the CPU involves the concept of scheduling to allow for better utilisation of CPU time and resources.
Regarding input/output operations, the operating system will need to deal with:
➢ Any I/O operation which has been initiated by the computer user.
➢ Any I/O operation which occurs while software is being run and resources, such as printers or disk drives, are requested.
Direct Data Transfer between Memory and I/O devices using DMA
A Direct Memory Access (DMA) controller is needed to allow hardware to access main memory independently of the CPU. DMA frees up the CPU, allowing it to carry out other tasks while slower I/O operations are taking place.
The slow speed of I/O compared to a typical CPU clock cycle shows that management of CPU usage is vital, to ensure that the CPU does not remain idle while I/O is taking place.
The Kernel
The kernel is the core part of the operating system and is resident in main memory. It acts as the interrupt handler, and interrupt handling is used to manage low-level scheduling.
Process Management
Multitasking
Multitasking allows a computer to carry out more than one task (process) at a time. A process is a program that has started to be executed. Each of these processes shares the common hardware resources. To ensure multitasking operates correctly, scheduling is used to decide which process should be carried out next. In multitasking, many processes are being carried out at the same time, and the best use of computer resources is ensured by monitoring the state of each process.
Types of Multitasking Operating Systems
Preemptive:
➢ Resources are allocated to a process for a limited time.
➢ A process can be interrupted while it is running.
➢ This is a more flexible form of scheduling.
Non-Preemptive:
➢ Once resources are allocated to a process, the process retains them until it has completed its burst time (the time when a process has control of the CPU) or the process has switched to the waiting state.
➢ A process cannot be interrupted while running; it must first finish or switch to a waiting state.
➢ This is a more rigid form of scheduling.
Process Scheduling
Programs that are available to be run on a computer system are initially stored on disk. A user could submit a program as a 'job', which would include the program and some instructions about how it should be run.
The job scheduler is the part of the OS which selects processes and moves them from one state to another.
Process States
A process is defined as 'a program being executed'. A Process Control Block (PCB) is a data structure which contains all of the data needed for a process to run. The PCB can be created in memory, ready to receive data when the process is executed.
The PCB will store:
➢ The current process state (ready, running or blocked).
➢ Process privileges (such as which resources it is allowed to access).
➢ Register values (PC, MAR, MDR and ACC).
➢ Process priority and any scheduling information.
➢ The amount of CPU time the process will need to complete.
➢ A process ID which allows it to be uniquely identified.
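As a rough illustration of the fields listed above, a PCB could be sketched in Python as follows; the field names, default values and the simple register set are assumptions made for illustration, not details of any particular operating system.

# Minimal sketch of a Process Control Block holding the fields listed above.
from dataclasses import dataclass, field

@dataclass
class PCB:
    process_id: int                      # unique identifier for the process
    state: str = "ready"                 # 'ready', 'running' or 'blocked'
    privileges: tuple = ()               # resources the process may access
    registers: dict = field(default_factory=lambda: {"PC": 0, "MAR": 0, "MDR": 0, "ACC": 0})
    priority: int = 0                    # used by the scheduler
    cpu_time_needed: int = 0             # estimated CPU time to complete

pcb = PCB(process_id=1, priority=5, cpu_time_needed=20)
print(pcb.state)   # 'ready'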
The register values must be recorded in the PCB when a process leaves the running state, for example:
➢ When a process in the running state makes a system call requiring an I/O operation and has to change to the blocked (waiting) state.
➢ When the scheduler decides to halt the process for any reason. The OS kernel invokes an interrupt-handling routine. The current values stored in the registers must be recorded in the process control block. This allows the process to continue execution when it eventually returns to the running state.
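A simplified, self-contained sketch of such a context switch is given below; the register set, the dictionary-based PCBs and the choice of new states are illustrative assumptions.

# Context-switch sketch: save the running process's registers into its PCB,
# then restore the saved registers of the next process to run.
cpu_registers = {"PC": 120, "MAR": 0, "MDR": 0, "ACC": 7}   # current register values

def context_switch(current_pcb, next_pcb):
    current_pcb["registers"] = dict(cpu_registers)   # record register values in the PCB
    current_pcb["state"] = "ready"                   # or 'blocked' if waiting for I/O
    cpu_registers.update(next_pcb["registers"])      # reload the next process's registers
    next_pcb["state"] = "running"

p1 = {"registers": {}, "state": "running"}
p2 = {"registers": {"PC": 300, "MAR": 0, "MDR": 0, "ACC": 0}, "state": "ready"}
context_switch(p1, p2)
print(cpu_registers["PC"])   # 300 - p2's saved program counter is now in the CPU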
Objectives of Scheduling
Scheduling helps:
➢ to keep the CPU busy at all times and so maximize throughput
➢ to give each process a fair share of CPU time and be fair to all users
➢ to allow all processes to complete in a reasonable amount of time
➢ to maximize the use of peripherals
➢ to prevent deadlock, by resolving situations in which there are conflicts between two processes requiring the CPU at the same time
➢ to allow multiprogramming
➢ to allow the highest priority jobs to be executed first
➢ to service the largest possible number of jobs in a given amount of time
➢ to minimize the amount of time users must wait for their results.
Scheduling Routines/Algorithms
First come first served scheduling (FCFS):
This is a non-preemptive algorithm similar in concept to a queue structure which uses the first in, first out (FIFO) principle. Data added to the queue first is the data that leaves the queue first. Jobs are executed on a first come, first served basis, which can result in poor performance because the average waiting time is high in this routine. FCFS is easy to understand and implement: there is no complex logic, as each process request is queued as it is received and executed one by one. Starvation does not occur, because every process will eventually get a chance to run.
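A minimal sketch of FCFS is given below; the process names and burst times are made-up example values, used only to show how a long first job pushes up the average waiting time.

# FCFS scheduling sketch: processes run to completion in arrival order (non-preemptive).
def fcfs(processes):
    """processes: list of (name, burst_time) tuples in arrival order."""
    waiting_time = 0            # how long the next process waits before it starts
    total_waiting = 0
    for name, burst in processes:
        print(f"{name}: waits {waiting_time}, then runs for {burst}")
        total_waiting += waiting_time
        waiting_time += burst   # the next process waits for all earlier bursts
    print(f"Average waiting time: {total_waiting / len(processes):.1f}")

fcfs([("P1", 24), ("P2", 3), ("P3", 3)])   # long first job -> high average wait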
Round Robin:
A round-robin algorithm allocates a time slice to each process and is
therefore preemptive, because a process will be halted when its time slice has run out. It can
be implemented as a FIFO queue. It normally does not involve prioritising processes.
➢ Each process is served by CPU for a fixed time slice (so all processes are given the
same priority).
➢ Starvation doesn't occur (because for each round robin cycle, every process is given a
fixed time/time slice to execute).
➢ Each process is provided a fixed time to execute, called a quantum.
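A minimal sketch of round robin is given below; the process names, burst times and the quantum are illustrative assumptions.

# Round-robin scheduling sketch (preemptive, fixed time slice per process).
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst_time) tuples; quantum: the time slice."""
    queue = deque(processes)            # FIFO ready queue
    clock = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one time slice
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))   # preempted: rejoin the back of the queue
            print(f"t={clock}: {name} preempted, {remaining} remaining")
        else:
            print(f"t={clock}: {name} finished")

round_robin([("P1", 5), ("P2", 3), ("P3", 8)], quantum=2)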
Memory Management
The memory manager, which is part of the operating system, determines which processes should be in main memory and where they should be stored. It also determines how memory is allocated when a number of processes are competing with each other. When a process starts up, it is allocated memory; when it completes, the OS deallocates its memory space.
Single (contiguous) Allocation:
All of the memory is made available to a single application. This leads to inefficient use of main memory.
Methods Used for Partitioning of Main Memory
Paged Memory/Paging
The modern approach is to use paging. A process is divided into equal-sized pages and memory is divided into frames of the same size. Secondary storage (virtual memory) can also be divided into frames.
❖ Each process that is executed is divided into blocks of the same size to fit into the page frames.
❖ Not all of the pages of a program need to be loaded to start execution.
❖ If an instruction is to be executed which is not in a page currently loaded, then the required page must be swapped into memory at the expense of another page (when main memory is full).
❖ Each process has a page table that is used to manage the pages of that process.
❖ A program's pages may be scattered throughout the available page frames.
❖ The OS manages which page frames are allocated to which pages of a process by using the page table.
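A small sketch of how a page table can be used to translate a logical address into a physical one is shown below; the page size, the page-table contents and the example address are assumed values for illustration.

# Page-table lookup sketch: translate a logical address to a physical address.
PAGE_SIZE = 1024                       # bytes per page/frame (assumed value)
page_table = {0: 5, 1: 2, 2: None}     # page number -> frame number (None = not in memory)

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError(f"page fault: page {page} is not in memory")
    return frame * PAGE_SIZE + offset   # physical address

print(translate(1030))   # page 1, offset 6 -> frame 2 * 1024 + 6 = 2054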
When paging is being used, the starting situation is that the set of pages comprising a process is stored on disk. One or more of these pages is loaded into memory when the process changes to the ready state. When the process is dispatched to the running state, it starts executing. At some stage, the process will need access to a page that the page table indicates is not in memory. This is called a page fault condition. In order to bring in the required page from secondary storage, a page will need to be taken out of memory first. This is when a page replacement algorithm is needed.
Page Replacement Algorithms
❖ First in First Out
❖ Least recently used page
❖ Least used page
❖ Longest Resident (max time in memory)
❖ Shortest Resident
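As an illustration of the first of these, a sketch of first in first out page replacement is given below; the page reference string and the number of frames are made-up example values.

# FIFO page-replacement sketch: count page faults for a reference string.
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()                 # pages currently in memory, oldest first
    faults = 0
    for page in reference_string:
        if page not in frames:       # page fault: the page must be brought in
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()     # evict the page that was loaded first
            frames.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], num_frames=3))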
Disk Thrashing:
Systems that are running virtual memory can suffer the disadvantage of disk thrashing. Disk thrashing occurs when part of a process on one page requires another page which is on disk. When that page is loaded, it almost immediately requires the original page again. This can lead to an almost never-ending loading and unloading of pages.
Segmentation
An early approach to memory management, when different processes were loaded into memory simultaneously, was to partition memory. The aim was to load the whole of a process into one partition. This was wasteful of memory if the process size was less than the partition size. An improvement was dynamic partitioning, where the partition size was allowed to adjust to match the process size.
An extension of this idea, which allowed larger processes to be handled, was segmentation. Segmentation has the following characteristics:
❖ Memory is divided into variable-length blocks called segments.
❖ Jobs or files can consist of many segments.
❖ An index of segments is stored, which must hold the base address and length of each segment.
In segmentation a large process is divided into segments for loading into memory, but the segments are not constrained to be the same size.
In paging a large process is divided into pages which have to be the same size.
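A small sketch of how a segment index (base address plus length) can be used to translate an address is shown below; the segment-table contents and the example segment number and offset are assumed values for illustration.

# Segment-table lookup sketch: each segment has a base address and a length.
segment_table = {0: (4000, 1200), 1: (9000, 300)}   # segment number -> (base, length)

def translate_segment(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:              # the offset must lie within the segment
        raise RuntimeError("segmentation fault: offset beyond segment length")
    return base + offset              # physical address

print(translate_segment(0, 100))      # 4000 + 100 = 4100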
***********************
ESQ# 1 Virtual memory, paging and segmentation are used in memory management.
Explain what is meant by virtual memory. P31 Oct 2022 [3]
Ans: Secondary storage (disk) is used to extend the RAM, so the CPU appears to be able to access more memory space than the available RAM. Only the data in use needs to be in main memory, so data can be swapped between RAM and virtual memory as necessary. Virtual memory is created temporarily.
ESQ#2 State one difference between paging and segmentation in the way memory is divided.
Ans: Paging divides memory into fixed-size blocks, whereas segmentation divides memory into variable-sized blocks.
The operating system divides memory into pages; the compiler is responsible for calculating segment size.
Access times for paging are faster than for segmentation.
ESQ#3: Match the term with the correct description:
Ans:
**************************************