
SYSTEM SOFTWARE

How can an operating system maximize the use of computer resources?

1. Process Management
The OS manages multiple processes by allocating CPU time and resources to each.
Techniques include:
Multitasking: Allows several processes to run concurrently by switching between
them rapidly.
Process Scheduling: Uses algorithms to determine the execution order of processes,
balancing fairness and efficiency.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

2. Memory Management
Efficient memory usage is crucial to maximize performance.
The OS handles:
Virtual Memory: Extends physical memory by using disk space, allowing large
applications to run without requiring a large amount of physical RAM.
Paging: Divides memory into fixed-sized pages, ensuring better use of available
memory.
Segmentation: Divides memory into logical segments, which improves organization and
access speed.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

3. I/O Management
The OS coordinates input/output devices to prevent bottlenecks:
Buffering: Temporarily stores data during I/O operations to reduce the speed
mismatch between devices.
Caching: Keeps frequently used data in faster storage (RAM), minimizing access
time.
Device Drivers: Provide a consistent interface to hardware devices, improving compatibility and performance.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

4. File System Management


The OS organizes and provides access to data on storage devices:
File Caching: Stores frequently accessed files in memory for faster access.
Disk Scheduling: Optimizes the order of disk read/write operations to reduce seek
time and improve throughput.
File Permissions: Controls access to files, enhancing security and stability.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

5. Security and Protection


Preventing unauthorized access ensures resource integrity and availability:
User Authentication: Ensures only authorized users can access system resources.
Access Control: Protects files and processes from being accessed or modified by
unauthorized entities.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

6. Resource Allocation
Fair Resource Sharing: Ensures that CPU, memory, and I/O resources are allocated
fairly among all processes.
Priority Assignment: Allows critical processes to access resources first, improving
system responsiveness.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

7. Power Management
On modern systems, the OS controls power consumption by:
CPU Throttling: Dynamically adjusting CPU speed based on workload.
Sleep Modes: Reducing power usage when the system is idle.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Direct memory access (DMA) controller


Direct memory access (DMA) is a method that allows an input/output (I/O) device to
send or receive data directly to or from the main memory, bypassing the CPU to
speed up memory operations.
The process is managed by a chip known as a DMA controller (DMAC).
The DMA controller is needed to allow hardware to access the main memory
independently of the CPU.
When the CPU is carrying out a programmed I/O operation, it is fully utilised during the entire read/write operation; DMA frees up the CPU, allowing it to carry out other tasks while the slower I/O operations are taking place.
The DMA controller initiates the data transfers.
The CPU carries out other tasks while the data transfer operation is taking place.
Once the data transfer is complete, an interrupt signal is sent to the CPU by the DMA controller.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

How OS hides complexity of the hardware from users


This is done by:
using GUI interfaces rather than a CLI
using device drivers, which simplify the complexity of hardware interfaces
simplifying the saving and retrieving of data from memory and storage devices
carrying out background utilities such as virus scanning which the user can ‘leave
to its own devices’
Modern computers use a drag and drop method, which removes any of the complexities
of interfacing directly with the computer.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

The Kernel
The kernel is the core component of an operating system, responsible for managing
hardware resources and providing a bridge between software applications and
hardware. It operates in a highly privileged mode, known as kernel mode, and
performs critical tasks such as process management, memory management, and I/O
control.
Kernel Operations
System Calls
Applications communicate with the kernel via system calls, such as open(), read(),
and write(). The kernel handles these requests by interacting with hardware or
other system components.
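
As a small illustration, Python's standard os module exposes thin wrappers over these calls, so each line below ends up trapping into the kernel (the file name is invented for the sketch):

import os

# os.open/os.write/os.read wrap the kernel's open(), write() and read()
# system calls; each call switches into kernel mode and back.
fd = os.open("example.txt", os.O_CREAT | os.O_WRONLY)   # invented file name
os.write(fd, b"written via a system call\n")
os.close(fd)

fd = os.open("example.txt", os.O_RDONLY)
print(os.read(fd, 100))    # ask the kernel for up to 100 bytes
os.close(fd)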
Interrupt Handling
The kernel responds to hardware and software interrupts, ensuring real-time events
are processed promptly. Interrupts allow the system to respond to external events
like key presses or network packets.
Context Switching
The kernel saves the state of a running process and loads the state of another, enabling multitasking. It ensures that each process appears to run continuously, even though the CPU switches between them rapidly.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Process management
1. Multitasking
Multitasking allows computers to carry out more than one task at a time. Each of
these processes/tasks will share common hardware resources. To ensure multitasking
operates correctly, scheduling is used to decide which processes should be carried
out. (A program that is loaded into memory and is executing is commonly referred to as a process.)
Multitasking ensures the best use of computer resources by monitoring the state of
each process. It should give the appearance that many processes are being carried
out at the same time.
There are two types of multitasking operating systems:
preemptive (a process can be interrupted; it is pre-empted after each time quantum)
non-preemptive (a process cannot be interrupted; it runs until it completes or enters a wait state)

Difference between preemptive and non-preemptive multitasking:

Preemptive                                       Non-Preemptive
Resources allocated for a limited time.          Resources retained until completion/wait state.
Process can be interrupted.                      Process cannot be interrupted.
Risk of starvation for low-priority tasks.       Long processes may delay shorter tasks.
More flexible scheduling.                        More rigid scheduling.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Low level scheduling


Low level scheduling decides which process should next get the use of CPU time.
Its objectives are to maximise the system throughput, ensure response time is acceptable and ensure that the system remains stable at all times.
Suppose two apps need to use a printer; the scheduler will use interrupts, buffers and queues to ensure only one process gets printer access – but it also ensures that the other process gets a share of the required resources.
Throughput is a measure of how many units of information a system can process in a given amount of time.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Process scheduler

A process scheduler is the part of the OS that decides which process in the ready queue should run next and allocates CPU time to it.


Process priority depends on:
its category (is it a batch, online or real time process?)
whether the process is CPU-bound (for example, a large calculation such as finding 10 000! (10 000 factorial) would need long CPU cycles and short I/O cycles) or I/O-bound (for example, printing a large number of documents would require short CPU cycles but very long I/O cycles)
resource requirements (which resources does the process require, and how many?)
the turnaround time, waiting time and response time for the process
whether the process can be interrupted during running. Once a task/process has been
given a priority, it can still be affected by the deadline for the completion of
the process
how much CPU time is needed when running the process
the wait time and CPU time
the memory requirements of the process.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Process states
A process control block (PCB) is a data structure which contains all of the data needed for a process to run; it can be created in memory when data needs to be received during execution time (a simplified sketch appears at the end of this section).
The PCB will store:
current process state (ready, running or blocked)
process privileges (such as which resources it is allowed to access)
register values (PC, MAR, MDR and ACC)
process priority and any scheduling information
the amount of CPU time the process will need to complete
a process ID which allows it to be uniquely identified.
A process state refers to one of the following three possible conditions:
1. running
2. ready
3. blocked
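
As an illustration only, a PCB could be modelled as a simple record; the field names below are assumptions made for the sketch, not a real kernel structure.

from dataclasses import dataclass, field

@dataclass
class PCB:                              # simplified, illustrative PCB
    pid: int                            # unique process ID
    state: str = "ready"                # running, ready or blocked
    priority: int = 0                   # scheduling priority
    registers: dict = field(default_factory=dict)  # PC, MAR, MDR, ACC values
    cpu_time_needed: int = 0            # CPU time required to complete

pcb = PCB(pid=42, registers={"PC": 0, "ACC": 0})
pcb.state = "running"                   # updated by the scheduler on dispatch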

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Summary of the conditions when changing from one process state to another:

1. running state → ready state: A program is executed during its time slice; when the time slice is completed an interrupt occurs and the program is moved to the READY queue.
2. ready state → running state: It is the process's turn to use the processor; the OS scheduler allocates CPU time to the process so that it can be executed.
3. running state → blocked state: The process needs to carry out an I/O operation; the OS scheduler places the process into the BLOCKED queue.
4. blocked state → ready state: The process is waiting for an I/O resource; an I/O operation is ready to be completed by the process.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

What is a Scheduling Algorithm?


A CPU scheduling algorithm is used to determine which process will use the CPU for execution and which processes to hold or remove from execution. The main goal of CPU scheduling algorithms in an OS is to make sure that the CPU is never idle, meaning that the OS always has at least one process ready for execution among the available processes in the ready queue.
-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Preemptive Scheduling Algorithms


In these algorithms, each process is assigned a priority. Whenever a higher-priority process arrives, the lower-priority process that is occupying the CPU is preempted.
That is, it releases the CPU, and the higher-priority process takes the CPU for its execution.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Non-Preemptive Scheduling Algorithms


In these algorithms, the process cannot be preempted.
That is, once a process is running on the CPU, it releases the CPU only when it terminates or switches to a waiting state.
Often, these are the types of algorithms that must be used because of the limitations of the hardware.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Why Do We Need Scheduling?


A process needs both CPU time and I/O time to complete its execution.
In a multiprogramming system, one process can use the CPU while another is waiting for I/O, whereas in a uniprogramming system, time spent waiting for I/O is completely wasted as the CPU is idle during this time.
Multiprogramming can be achieved by the use of process scheduling.
The purposes of a scheduling algorithm are as follows:
Maximize CPU utilization, keeping the CPU as busy as possible
Fair allocation of CPU time to every process
Maximize the throughput
Minimize the turnaround time
Minimize the waiting time
Minimize the response time

There are some important terminologies to know for understanding the scheduling
algorithms:
Arrival Time: This is the time at which a process arrives in the ready queue.
Completion Time: This is the time at which a process completes its execution.
Burst Time: This is the time required by a process for CPU execution.
Turnaround Time: This is the difference between completion time and arrival time.
It can be calculated as: Turnaround Time = Completion Time – Arrival Time
Waiting Time: This is the difference between turnaround time and burst time.
It can be calculated as: Waiting Time = Turnaround Time – Burst Time (see the worked example after this list)
Throughput: This is the number of processes that complete their execution per unit of time.
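
For example, a process that arrives at time 2, needs a CPU burst of 5 and completes at time 10 has a turnaround time of 10 – 2 = 8 and a waiting time of 8 – 5 = 3. A minimal check of the two formulas in Python (the numbers are invented):

arrival, burst, completion = 2, 5, 10
turnaround = completion - arrival    # 10 - 2 = 8
waiting = turnaround - burst         # 8 - 5 = 3
print(turnaround, waiting)           # prints: 8 3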

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Scheduling routine algorithms


First come first served scheduling (FCFS)
Shortest job first scheduling (SJF)
Shortest remaining time first scheduling (SRTF)
Round robin
First come first served scheduling (FCFS)
In this type of scheduling algorithm, the CPU is allocated first to the process that requests it first. That means the process with the earliest arrival time will be executed first by the CPU. It is a non-preemptive scheduling algorithm: the priority of processes does not matter, and they are executed in the order in which they arrive. This scheduling algorithm is implemented with a FIFO queue. As a process becomes ready to execute, its Process Control Block (PCB) is linked to the tail of this FIFO queue. When the CPU becomes free, it is assigned to the process at the head of the queue.
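
A minimal FCFS sketch in Python (the process data is invented for illustration): processes are served strictly in arrival order, so the scheduler only needs a running clock.

# each process: (name, arrival time, burst time), already sorted by arrival
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

time = 0
for name, arrival, burst in processes:
    time = max(time, arrival)          # CPU may sit idle until the process arrives
    completion = time + burst
    turnaround = completion - arrival  # Completion Time - Arrival Time
    waiting = turnaround - burst       # Turnaround Time - Burst Time
    print(name, completion, turnaround, waiting)
    time = completion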

Advantages
Involves no complex logic and just picks processes from the ready queue one by one.
Easy to implement and understand.
Every process will eventually get a chance to run so no starvation occurs.

Disadvantages
Waiting time for processes with short execution times is often very long.
It favors CPU-bound processes over I/O-bound processes.
Leads to the convoy effect, where one long process holds up all the processes queued behind it.
Causes lower device and CPU utilization.
Poor performance, as the average wait time is high.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Shortest Job First (SJF) Scheduling Algorithm


Shortest Job First is a non-preemptive scheduling algorithm in which the process with the shortest burst time is executed first by the CPU. That means the shorter the execution time, the sooner the process gets the CPU. In its pure form, this algorithm assumes the arrival times of the processes are the same, and the processor must know the burst times of all the processes in advance. If two processes have the same burst time, First Come First Served (FCFS) scheduling is used to break the tie. The preemptive mode of SJF scheduling is known as the Shortest Remaining Time First scheduling algorithm.
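
A minimal non-preemptive SJF sketch (invented process data): at each step the scheduler picks the shortest burst among the processes that have already arrived; Python's min() keeps the first of any tied entries, which matches the FCFS tie-break.

# each process: (name, arrival time, burst time)
processes = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]

time, remaining = 0, list(processes)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:                         # CPU idle until the next arrival
        time = min(p[1] for p in remaining)
        continue
    job = min(ready, key=lambda p: p[2])  # shortest burst wins
    name, arrival, burst = job
    time += burst                         # run to completion (non-preemptive)
    print(name, "completed at", time, "turnaround", time - arrival)
    remaining.remove(job)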

Advantages
Results in increased Throughput by executing shorter jobs first, which mostly have
a shorter turnaround time.
Gives the minimum average waiting time for a given set of processes.
Best approach to minimize waiting time for other processes awaiting execution.
Useful for batch-type processing, where CPU time is known in advance and waiting for jobs to complete is not critical.

Disadvantages
May lead to starvation: if shorter processes keep arriving, longer processes will never get a chance to run.
Time taken by a process must be known to the CPU beforehand, which is not always
possible.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Shortest Remaining Time First (SRTF) Scheduling Algorithm


The Shortest Remaining Time First (SRTF) scheduling algorithm is the preemptive mode of the Shortest Job First (SJF) algorithm, in which jobs are scheduled according to the shortest remaining time. In this scheduling technique, the process with the shortest burst time is executed first by the CPU, but the arrival times of the processes need not be the same. If a process with a shorter remaining burst time arrives, the current process is preempted and the newly ready job is executed first.
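
A minimal tick-by-tick SRTF sketch (invented data): at every time unit the scheduler runs whichever arrived process has the least remaining burst, so a newly arrived shorter job preempts the current one.

# name -> [arrival time, remaining burst]
processes = {"P1": [0, 8], "P2": [1, 4], "P3": [2, 2]}

time = 0
while processes:
    ready = {n: r for n, (a, r) in processes.items() if a <= time}
    if not ready:                      # nothing has arrived yet
        time += 1
        continue
    name = min(ready, key=ready.get)   # least remaining time wins
    processes[name][1] -= 1            # run it for one time unit
    time += 1
    if processes[name][1] == 0:
        print(name, "completed at", time)
        del processes[name]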

Advantages
Short processes are serviced faster than under SJF, since SRTF is its preemptive version.

Disadvantages
Context switching is done a lot more times and adds to the overhead time.
Like SJF, it may still lead to starvation and requires the knowledge of process
time beforehand.
Impossible to implement in interactive systems where the required CPU time is
unknown.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Round Robin Scheduling Algorithm


In this scheduling algorithm, processes are executed cyclically, and each process is allocated a small amount of time called a time slice or time quantum. The ready queue is implemented using the circular queue technique: the CPU is allocated to each process for the given time quantum, after which the process is added back to the ready queue to wait for its next turn. If a process completes its execution within its time quantum, it terminates and releases the CPU; if it is not completely executed within the time quantum, it is preempted, added back to the ready queue, and waits for its next turn to complete its execution. This algorithm is mostly used for multitasking in time-sharing systems and operating systems with multiple clients, so that they can make efficient use of resources.
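
A minimal round robin sketch (invented data; the quantum of 2 and simultaneous arrivals are assumptions): processes cycle through a FIFO queue, each running for at most one quantum before being re-queued.

from collections import deque

quantum = 2
queue = deque([("P1", 5), ("P2", 3), ("P3", 1)])  # (name, remaining burst)

time = 0
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)        # run a full quantum, or less if it finishes
    time += run
    remaining -= run
    if remaining:
        queue.append((name, remaining))  # preempted: back of the queue
    else:
        print(name, "completed at", time)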

Advantages
All processes are given the same priority; hence all processes get an equal share
of the CPU.
Since it is cyclic in nature, no process is left behind, and starvation doesn't
exist.

Disadvantages
Throughput depends on the length of the time quantum. Setting it too short increases the overhead and lowers the CPU efficiency, but setting it too long gives a poor response to short processes and tends to exhibit the same behavior as FCFS.
The average waiting time of the Round Robin algorithm is often long.
Context switching happens many more times and adds to the overhead time.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Interrupt handling and OS kernels

The CPU will check for interrupt signals. The system will enter kernel mode if any of the following types of interrupt signal are sent:

Device interrupt (printer out of paper, device not present).
Exceptions (instruction faults such as division by zero, unidentified opcode, stack fault).
Traps/software interrupt (process requesting a resource such as a disk drive).


When an interrupt is received, the kernel will consult the interrupt dispatch table (IDT) – this table links a device description with the appropriate interrupt routine. The IDT supplies the address of the low-level routine that handles the interrupt event received. The kernel saves the state of the interrupted process on the kernel stack, and the process state is restored once the interrupting task has been serviced. Interrupts are prioritised using interrupt priority levels (IPL), numbered 0 to 31.

A process is suspended only if the interrupt's priority level is greater than that of the current task.
An interrupt with a lower IPL is saved in the interrupt register and is handled (serviced) when the IPL value falls to a certain level.
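
The dispatch-table and IPL ideas can be sketched as a simple mapping from interrupt number to a priority level and handler routine (illustrative only; the numbers and handlers are invented, and a real IDT holds the addresses of low-level kernel routines):

# interrupt number -> (IPL, handler routine); values invented
def device_handler():
    print("servicing device interrupt")

def trap_handler():
    print("servicing software trap")

idt = {3: (10, device_handler), 7: (4, trap_handler)}

current_ipl = 6                        # IPL of the task currently running

def dispatch(irq):
    ipl, handler = idt[irq]
    if ipl > current_ipl:              # only a higher IPL suspends the current task
        handler()
    else:
        print("interrupt", irq, "held pending until the IPL falls")

dispatch(3)   # IPL 10 > 6: serviced immediately
dispatch(7)   # IPL 4 <= 6: held pending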

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Memory management

As with the storage of data on a hard disk, the memory used by processes carried out by the CPU may also become fragmented. To overcome this problem, memory management determines which processes should be in main memory and where they should be stored (optimization).

When a process starts up, it is allocated memory; when it is completed, the OS deallocates its memory space. We will now consider the methods by which memory management allocates memory to processes/programs and data.

Single (contiguous) allocation

With this method, all of the memory is made available to a single application. This
leads to inefficient
use of main memory.

Paged memory/paging

In paging, the memory is split up into partitions (blocks) of a fixed size.

The physical memory and logical memory are divided up into the same fixed-size
memory blocks.
Physical memory blocks are known as frames and fixed-size logical memory blocks are
known as
pages. A program is allocated a number of pages that is usually just larger than
what is actually
needed. When a process is executed, process pages from logical memory are loaded
into frames in
physical memory. A page table is used, with the page number as its index. Each process has its own separate page table that maps logical addresses to physical addresses.
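
A minimal sketch of the page-table lookup (the page size and table contents are invented): a logical address is split into a page number and an offset, and the page table maps the page number to a frame.

PAGE_SIZE = 1024                  # bytes per page and per frame (assumed)
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (invented)

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]      # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(1500))            # page 1, offset 476 -> frame 2 -> 2524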

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------
Segmentation/segmented memory

In segmented memory, the logical address space is broken up into variable-size memory blocks/partitions called segments. Each segment has a name and a size. For execution to take place, segments from logical memory are loaded into physical memory. The user specifies each address using two values: the segment name and an offset. In practice, the segments are numbered rather than named, and this segment number is used as the index in a segment map table.
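
A matching sketch of the segment-map lookup (table contents invented): the two-part address is a segment number plus an offset, and the offset is checked against the segment's size before being added to its base.

# segment number -> (base address, size); values invented
segment_table = {0: (1000, 400), 1: (6000, 1200)}

def translate(segment, offset):
    base, size = segment_table[segment]
    if offset >= size:            # offset beyond the segment's size
        raise MemoryError("segmentation fault")
    return base + offset

print(translate(1, 300))          # 6000 + 300 = 6300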

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------

Summary of the differences between paging and segmentation:

Paging: a page is a fixed-size block of memory.
Segmentation: a segment is a variable-size block of memory.

Paging: since the block size is fixed, it is possible that all blocks may not be fully used – this can lead to internal fragmentation.
Segmentation: because memory blocks are a variable size, this reduces the risk of internal fragmentation but increases the risk of external fragmentation.

Paging: the user provides a single value – this means that the hardware decides the actual page size.
Segmentation: the user supplies the address in two values (the segment number and the offset).

Paging: a page table maps logical addresses to physical addresses.
Segmentation: a segment map table containing segment number + offset maps logical addresses to physical addresses.

Paging: the process of paging is essentially invisible to the user/programmer.
Segmentation: segmentation is essentially a visible process to the user/programmer.

Paging: procedures and any associated data cannot be separated when using paging.
Segmentation: procedures and any associated data can be separated when using segmentation.

-----------------------------------------------------------------------------------
-------------------------------------------------------------------------
