System Software (Summarize)
1. Process Management
The OS manages multiple processes by allocating CPU time and resources to each.
Techniques include:
Multitasking: Allows several processes to run concurrently by switching between
them rapidly.
Process Scheduling: Uses algorithms to determine the execution order of processes,
balancing fairness and efficiency.
-----------------------------------------------------------------------------------
2. Memory Management
Efficient memory usage is crucial to maximize performance.
The OS handles:
Virtual Memory: Extends physical memory by using disk space, allowing large
applications to run without requiring a large amount of physical RAM.
Paging: Divides memory into fixed-sized pages, ensuring better use of available
memory.
Segmentation: Divides memory into logical segments, which improves organization and
access speed.
-----------------------------------------------------------------------------------
3. I/O Management
The OS coordinates input/output devices to prevent bottlenecks:
Buffering: Temporarily stores data during I/O operations to reduce the speed
mismatch between devices.
Caching: Keeps frequently used data in faster storage (RAM), minimizing access
time.
Device Drivers: Provides a consistent interface to hardware devices, improving
compatibility and performance.
-----------------------------------------------------------------------------------
6. Resource Allocation
Fair Resource Sharing: Ensures that CPU, memory, and I/O resources are allocated
fairly among all processes.
Priority Assignment: Allows critical processes to access resources first, improving
system responsiveness.
-----------------------------------------------------------------------------------
7. Power Management
On modern systems, the OS controls power consumption by:
CPU Throttling: Dynamically adjusting CPU speed based on workload.
Sleep Modes: Reducing power usage when the system is idle.
-----------------------------------------------------------------------------------
The Kernel
The kernel is the core component of an operating system, responsible for managing
hardware resources and providing a bridge between software applications and
hardware. It operates in a highly privileged mode, known as kernel mode, and
performs critical tasks such as process management, memory management, and I/O
control.
Kernel Operations
System Calls
Applications communicate with the kernel via system calls, such as open(), read(),
and write(). The kernel handles these requests by interacting with hardware or
other system components.
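In Python, the os module exposes thin wrappers over these system calls, so the round trip through the kernel can be demonstrated directly (a minimal sketch; the temporary file is created just for the demonstration):

```python
import os
import tempfile

# Create a scratch file; mkstemp itself uses the open() system call.
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)

fd = os.open(path, os.O_WRONLY)   # open() system call -> file descriptor
os.write(fd, b"hello kernel")     # write() system call
os.close(fd)                      # close() system call

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)           # read() system call
os.close(fd)
os.remove(path)

print(data)  # b'hello kernel'
```

Each call traps into kernel mode, the kernel performs the I/O, and control returns to the application with the result.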
Interrupt Handling
The kernel responds to hardware and software interrupts, ensuring real-time events
are processed promptly. Interrupts allow the system to respond to external events
like key presses or network packets.
Context Switching
The kernel saves the state of a running process and loads the state of another
enabling multitasking. It ensures that each process appears to run continuously,
even though the CPU switches between them rapidly.
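The save/restore cycle can be modelled with ordinary dictionaries standing in for the CPU registers and the PCBs (a toy illustration only; a real kernel does this in assembly on the actual register file):

```python
# Toy model of a context switch: the "CPU registers" are a dict,
# and each process's PCB stores a saved copy of them.
cpu = {"PC": 0, "ACC": 0}

pcb_a = {"pid": 1, "saved": {"PC": 120, "ACC": 7}, "state": "ready"}
pcb_b = {"pid": 2, "saved": {"PC": 0,  "ACC": 0}, "state": "running"}

def context_switch(old, new):
    # 1. save the outgoing process's register values into its PCB
    old["saved"] = dict(cpu)
    old["state"] = "ready"
    # 2. restore the incoming process's register values from its PCB
    cpu.update(new["saved"])
    new["state"] = "running"

cpu.update(pcb_b["saved"])    # B is currently running
cpu["PC"] = 55                # B executes for a while
context_switch(pcb_b, pcb_a)  # timer interrupt: switch to A

print(cpu["PC"])              # 120 - A resumes where it left off
print(pcb_b["saved"]["PC"])   # 55  - B's progress was preserved
```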
-----------------------------------------------------------------------------------
Process management
1. Multitasking
Multitasking allows computers to carry out more than one task at a time. Each of
these processes/tasks will share common hardware resources. To ensure multitasking
operates correctly, scheduling is used to decide which processes should be carried
out. (A program that is loaded into memory and is executing is commonly referred
to as a process.)
Multitasking ensures the best use of computer resources by monitoring the state of
each process. It should give the appearance that many processes are being carried
out at the same time.
There are two types of multitasking operating systems:
preemptive (the OS can interrupt a running process, for example when its time
quantum expires, and give the CPU to another process)
non-preemptive (a running process keeps the CPU until it terminates or
voluntarily releases it, for example to wait for I/O).
-----------------------------------------------------------------------------------
Process states
A process control block (PCB) is a data structure which contains all of the data
needed for a process to run; it is created in memory when the process is created
and is updated during execution (for example, on every context switch).
The PCB will store:
current process state (ready, running or blocked)
process privileges (such as which resources it is allowed to access)
register values (PC, MAR, MDR and ACC)
process priority and any scheduling information
the amount of CPU time the process will need to complete
a process ID which allows it to be uniquely identified.
A process state refers to one of the following three possible conditions:
1. running
2. ready
3. blocked.
-----------------------------------------------------------------------------------
A summary of the conditions when changing from one process state to another:

State change         Condition
running → ready      A program is executed during its time slice; when the time
                     slice is completed an interrupt occurs and the program is
                     moved to the READY queue.
ready → running      It is the process's turn to use the processor; the OS
                     scheduler allocates CPU time to the process so that it can
                     be executed.
running → blocked    The process needs to carry out an I/O operation; the OS
                     scheduler places the process into the BLOCKED queue.
blocked → ready      The I/O operation the process was waiting for is completed,
                     so the process is moved back to the READY queue.
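The PCB fields and the four legal state transitions above can be sketched as a small Python class (an illustrative toy model, not a real kernel structure):

```python
from dataclasses import dataclass, field

# The four legal state transitions described above.
ALLOWED = {("running", "ready"), ("ready", "running"),
           ("running", "blocked"), ("blocked", "ready")}

@dataclass
class PCB:
    pid: int                # unique process ID
    state: str = "ready"    # current process state
    priority: int = 0       # scheduling priority
    registers: dict = field(default_factory=dict)  # saved PC, ACC, ...

    def move_to(self, new_state):
        # Reject transitions that are not in the table, e.g. blocked -> running
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

p = PCB(pid=42)
p.move_to("running")   # scheduler dispatches the process
p.move_to("blocked")   # process requests an I/O operation
p.move_to("ready")     # I/O completes
print(p.state)         # ready
```

Note that a blocked process can never move straight to running; it must pass through the READY queue first.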
-----------------------------------------------------------------------------------
There are some important terminologies to know for understanding the scheduling
algorithms:
Arrival Time: This is the time at which a process arrives in the ready queue.
Completion Time: This is the time at which a process completes its execution.
Burst Time: This is the time required by a process for CPU execution.
Turn-Around Time: This is the difference in time between completion time and
arrival time.
This can be calculated as: Turn Around Time = Completion Time – Arrival Time
Waiting Time: This is the difference in time between turnaround time and burst
time. This can be calculated as: Waiting Time = Turn Around Time – Burst Time
Throughput: It is the number of processes that are completing their execution per
unit of time.
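A quick worked example of the two formulas (the times are invented for illustration; all in the same time unit):

```python
arrival    = 2    # Arrival Time: process joins the ready queue
burst      = 5    # Burst Time: CPU time the process requires
completion = 12   # Completion Time: process finishes execution

turnaround = completion - arrival   # 12 - 2 = 10
waiting    = turnaround - burst     # 10 - 5 = 5

print(turnaround, waiting)  # 10 5
```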
-----------------------------------------------------------------------------------
First Come First Served (FCFS)
Advantages
Involves no complex logic and just picks processes from the ready queue one by one.
Easy to implement and understand.
Every process will eventually get a chance to run so no starvation occurs.
Disadvantages
Waiting time for processes with less execution time is often very long.
It favors CPU-bound processes over I/O-bound processes.
Leads to convoy effect.
Causes lower device and CPU utilization.
Poor performance as the average wait time is high.
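A minimal first-come-first-served sketch makes the convoy effect concrete (process names and burst times are invented; all processes are assumed to arrive at t = 0):

```python
def fcfs(processes):
    # processes: list of (name, burst time); run strictly in arrival order.
    t, waits = 0, {}
    for name, burst in processes:
        waits[name] = t    # time spent waiting before first getting the CPU
        t += burst
    return waits

# Convoy effect: one long CPU-bound job makes every short job wait behind it.
waits = fcfs([("long", 20), ("short1", 1), ("short2", 1)])
print(waits)  # {'long': 0, 'short1': 20, 'short2': 21}
```

The average waiting time here is (0 + 20 + 21) / 3 ≈ 13.7 time units, almost all of it caused by the single long process at the front of the queue.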
-----------------------------------------------------------------------------------
Shortest Job First (SJF)
Advantages
Results in increased Throughput by executing shorter jobs first, which mostly have
a shorter turnaround time.
Gives the minimum average waiting time for a given set of processes.
Best approach to minimize waiting time for other processes awaiting execution.
Useful for batch-type processing where CPU time is known in advance and waiting for
jobs to complete is not critical
Disadvantages
May lead to starvation as if shorter processes keep on coming, then longer
processes will never get a chance to run.
Time taken by a process must be known to the CPU beforehand, which is not always
possible.
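A non-preemptive SJF sketch (process names and burst times invented; with every process ready at t = 0, SJF is simply FCFS applied to the burst-time-sorted list):

```python
def sjf(processes):
    # processes: list of (name, burst time), all ready at t = 0.
    t, waits = 0, {}
    for name, burst in sorted(processes, key=lambda p: p[1]):
        waits[name] = t    # waiting time before first getting the CPU
        t += burst
    return waits

waits = sjf([("long", 20), ("short1", 1), ("short2", 1)])
avg = sum(waits.values()) / len(waits)
print(waits, avg)  # {'short1': 0, 'short2': 1, 'long': 2} 1.0
```

Running the short jobs first drives the average waiting time down to 1.0 time unit for this workload; the cost is that a stream of new short jobs could starve "long" indefinitely.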
-----------------------------------------------------------------------------------
Shortest Remaining Time First (SRTF)
Advantages
Short processes are completed faster than under SJF, since SRTF is the
preemptive version of it.
Disadvantages
Context switching is done a lot more times and adds to the overhead time.
Like SJF, it may still lead to starvation and requires the knowledge of process
time beforehand.
Impossible to implement in interactive systems where the required CPU time is
unknown.
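A tick-by-tick shortest-remaining-time-first sketch, showing a long job being preempted the moment a shorter one arrives (arrival and burst times are invented):

```python
def srtf(processes):
    # processes: {name: (arrival time, burst time)}. Simulate one time unit
    # at a time, always running the arrived process with the least
    # remaining CPU time.
    remaining = {n: b for n, (a, b) in processes.items()}
    arrival = {n: a for n, (a, b) in processes.items()}
    t, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:          # CPU idle until the next arrival
            t += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:
            del remaining[current]
            finish[current] = t
    return finish

# A is preempted at t = 1 when the shorter job B arrives.
finish = srtf({"A": (0, 8), "B": (1, 2)})
print(finish)  # {'B': 3, 'A': 10}
```

Every preemption here corresponds to a context switch, which is exactly the overhead cost noted above.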
-----------------------------------------------------------------------------------
Round Robin (RR)
Advantages
All processes are given the same priority; hence all processes get an equal share
of the CPU.
Since it is cyclic in nature, no process is left behind, and starvation doesn't
exist.
Disadvantages
The performance of Throughput depends on the length of the time quantum. Setting it
too short increases the overhead and lowers the CPU efficiency, but if we set it
too long, it gives a poor response to short processes and tends to exhibit the same
behavior as FCFS.
The average waiting time of the Round Robin algorithm is often long.
Context switching is done a lot more times and adds to the overhead time.
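A Round Robin sketch using a simple FIFO queue (the workload and the time quantum are invented values):

```python
from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, burst time); all ready at t = 0.
    queue = deque(processes)
    t, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for one quantum at most
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            finish[name] = t
    return finish

finish = round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2)
print(finish)  # {'C': 5, 'B': 8, 'A': 9}
```

Every pass through the loop that re-queues a process corresponds to one context switch, which is why a very small quantum inflates the overhead.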
-----------------------------------------------------------------------------------
The CPU will check for interrupt signals; if one is received, the system enters
kernel mode. A running process is suspended only if the interrupt priority level
(IPL) of the interrupt is greater than that of the current task. An interrupt
with a lower IPL is saved in the interrupt register and is handled (serviced)
once the IPL of the current task falls below its level.
-----------------------------------------------------------------------------------
Memory management
As with the storage of data on a hard disk, the main memory used by processes
may also become fragmented. To overcome this problem, memory management
determines which processes should be in main memory and where they should be
stored (optimization).
Single (contiguous) allocation
With this method, all of the memory is made available to a single application.
This leads to inefficient use of main memory.
Paged memory/paging
The physical memory and logical memory are divided up into the same fixed-size
memory blocks.
Physical memory blocks are known as frames and fixed-size logical memory blocks are
known as
pages. A program is allocated a number of pages that is usually just larger than
what is actually
needed. When a process is executed, process pages from logical memory are loaded
into frames in
physical memory. A page table is used; it uses page number as the index. Each
process has its own
separate page table that maps logical addresses to physical addresses
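The page-table lookup described above can be sketched in a few lines (the page size and table contents are assumed example values):

```python
PAGE_SIZE = 1024   # bytes per page and per frame (assumed for illustration)

# Per-process page table: the index is the page number, the value is the
# frame number in physical memory.
page_table = [5, 2, 7]   # page 0 -> frame 5, page 1 -> frame 2, page 2 -> frame 7

def translate(logical_address):
    page   = logical_address // PAGE_SIZE   # which page the address is in
    offset = logical_address %  PAGE_SIZE   # position within that page
    frame  = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset       # physical address

print(translate(1030))  # page 1, offset 6 -> frame 2 -> 2 * 1024 + 6 = 2054
```

Note that the programmer only ever supplies the single logical address; the split into page number and offset happens invisibly in hardware.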
-----------------------------------------------------------------------------------
Segmentation/segmented memory
-----------------------------------------------------------------------------------
Paging
a page is a fixed-size block of memory
since the block size is fixed, it is possible that all blocks may not be fully
used – this can lead to internal fragmentation (paging avoids external
fragmentation)
the user provides a single value (the logical address) – the hardware splits it
into a page number and an offset, so the hardware decides the actual page size
a page table maps logical addresses to physical addresses
the process of paging is essentially invisible to the user/programmer
procedures and any associated data cannot be separated when using paging
Segmentation
a segment is a variable-size block of memory
because memory blocks are a variable size, this reduces the risk of internal
fragmentation but increases the risk of external fragmentation
the user supplies the address as two values (the segment number and the offset
within the segment)
segmentation uses a segment map table that stores each segment's base address
and size; it maps logical addresses to physical addresses
segmentation is essentially a visible process to a user/programmer
procedures and any associated data can be separated when using segmentation
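Segmented address translation can be sketched with a segment map table holding each segment's base address and size, which also enables a bounds check (the table values are assumed for illustration):

```python
# Segment map table: segment number -> (base physical address, size/limit).
segment_table = {0: (1000, 400), 1: (6000, 200)}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:   # the offset must fall inside the segment
        raise MemoryError("segmentation fault: offset outside segment")
    return base + offset

print(seg_translate(1, 50))  # 6000 + 50 = 6050
```

Unlike paging, the two-part (segment, offset) address is visible to the programmer, and an out-of-range offset is trapped rather than silently mapping into another block.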
-----------------------------------------------------------------------------------