OS
CPU Scheduling
CPU scheduling allows one process to use the CPU while another process is delayed due to the unavailability of a resource such as I/O, thus making full use of the CPU. In short, CPU scheduling decides the order and priority in which processes run and allocates CPU time based on various parameters such as CPU utilization, throughput, turnaround time, waiting time, and response time. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
CPU Scheduling Criteria
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the load on the system.
2. Throughput
A measure of the work done by the CPU is the number of processes being
executed and completed per unit of time. This is called throughput. The
throughput may vary depending on the length or duration of the
processes.
3. Turnaround Time
Turnaround time is the total time elapsed between the submission of a process and its completion, i.e. its waiting time plus its burst time (plus any time spent doing I/O).
4. Waiting Time
A scheduling algorithm does not affect the time required to complete a process once it starts execution; it only affects the waiting time of a process, i.e. the time the process spends waiting in the ready queue.
5. Response Time
Response time is the time from the arrival of a process until it is first allocated the CPU:
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival Time
For example, a process that arrives at time 0 and first gets the CPU at time 5 has a response time of 5.
6. Completion Time
The completion time is the time when the process stops executing, which
means that the process has completed its burst time and is completely
executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling algorithm should favor higher-priority processes.
8. Predictability
A given process always should run in about the same amount of time
under a similar system load.
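To make these criteria concrete, here is a minimal sketch (not from the original text) that computes completion, turnaround, waiting, and response time for a non-preemptive first-come-first-serve schedule; the arrival and burst values are made-up example numbers.

/* Minimal sketch: scheduling criteria under non-preemptive FCFS.
 * Arrival and burst times below are assumed example values. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};  /* assumed arrival times */
    int burst[]   = {5, 3, 8};  /* assumed CPU burst times */
    int n = 3, clock = 0;

    for (int i = 0; i < n; i++) {
        if (clock < arrival[i])
            clock = arrival[i];                  /* CPU idles until arrival */
        int response   = clock - arrival[i];     /* first allocation - arrival */
        int completion = clock + burst[i];       /* time the burst finishes */
        int turnaround = completion - arrival[i];
        int waiting    = turnaround - burst[i];  /* time in the ready queue */
        printf("P%d: response=%d completion=%d turnaround=%d waiting=%d\n",
               i + 1, response, completion, turnaround, waiting);
        clock = completion;
    }
    return 0;
}

Under FCFS, response time equals waiting time, since a process runs to completion once it is first allocated the CPU.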
There are many factors that influence the choice of CPU scheduling algorithm, including the criteria listed above. Selecting the correct algorithm ensures that the system uses its resources efficiently, increases productivity, and improves user satisfaction.
Deadlocks
Overview :
A deadlock occurs when a set of processes is stalled because each process is holding a resource while waiting for a resource held by another process. For example, Process 1 may hold Resource 1 while waiting for Resource 2, which is held by Process 2, while Process 2 is in turn waiting for Resource 1.
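As an illustration, here is a minimal sketch (an assumed example, not from the original text) of this two-process deadlock, using POSIX threads as the "processes" and mutexes as the "resources"; running it hangs by design.

/* Minimal sketch of a two-thread deadlock (assumed example). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

void *process1(void *arg) {
    pthread_mutex_lock(&resource1);   /* holds Resource 1 */
    sleep(1);                         /* give process2 time to grab Resource 2 */
    pthread_mutex_lock(&resource2);   /* waits forever: Resource 2 is held */
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

void *process2(void *arg) {
    pthread_mutex_lock(&resource2);   /* holds Resource 2 */
    sleep(1);
    pthread_mutex_lock(&resource1);   /* waits forever: Resource 1 is held */
    pthread_mutex_unlock(&resource1);
    pthread_mutex_unlock(&resource2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process1, NULL);
    pthread_create(&t2, NULL, process2, NULL);
    pthread_join(t1, NULL);           /* never returns: the threads deadlock */
    pthread_join(t2, NULL);
    return 0;
}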
System Model :
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are examples of resource categories.
For all kernel-managed resources, the kernel keeps track of which resources are free and which are allocated, to which process they are allocated, and a queue of processes waiting for each resource to become available. Application-managed resources (e.g. binary or counting semaphores) can be controlled with mutexes or wait() and signal() calls.
Operations :
In normal operation, a process must request a resource before using it and
release it when finished, as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the required resource(s) become available. The system, for example, uses the functions open(), malloc(), new(), and request().
2. Use –
The process makes use of the resource, such as printing to a printer
or reading from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by
other processes.
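The request-use-release cycle can be sketched with a POSIX counting semaphore guarding a resource pool; the two-printer pool below is an assumed example, not from the original text.

/* Minimal sketch of request-use-release with a counting semaphore
 * modeling a pool of two printers (assumed example). */
#include <semaphore.h>
#include <stdio.h>

sem_t printers;                 /* counting semaphore guarding the pool */

void print_job(int id) {
    sem_wait(&printers);        /* 1. Request: block until a printer is free */
    printf("job %d printing\n", id);   /* 2. Use the resource */
    sem_post(&printers);        /* 3. Release: hand the printer back */
}

int main(void) {
    sem_init(&printers, 0, 2);  /* pool of two printers */
    for (int i = 0; i < 5; i++)
        print_job(i);
    sem_destroy(&printers);
    return 0;
}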
Necessary Conditions :
There are four conditions that must all hold for a deadlock to occur, as follows.
1. Mutual Exclusion –
At least one resource must be held in a non-shareable state; if another process requests it, it must wait for the resource to be released.
2. Hold and Wait –
A process must simultaneously hold at least one resource while waiting to acquire additional resources that are currently held by other processes.
3. No pre-emption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot be taken away from the process until the process voluntarily releases it.
4. Circular Wait –
There must be a set of processes P0, P1, P2, …, PN such that every P[i] is waiting for P[(i + 1) mod (N + 1)]. (It is important to note that this condition implies the hold-and-wait condition, but dealing with the four conditions is easier if they are considered separately.)
Deadlock Prevention :
Deadlocks can be prevented by ensuring that at least one of the four necessary conditions can never hold, as follows.
Condition-1 :
Mutual Exclusion :
Read-only resources such as shared files do not lead to deadlocks, but mutual exclusion cannot be eliminated for inherently non-shareable resources such as printers, so this condition usually cannot be broken.
Condition-2 :
Hold and Wait :
To avoid this condition, processes must be prevented from holding one or
more resources while also waiting for one or more others. There are a few
possibilities here:
Make it a requirement that all processes request all resources at the
same time. This can be a waste of system resources if a process
requires one resource early in its execution but does not require
another until much later.
Condition-3 :
No Preemption :
When possible, preemption of process resource allocations can help to
avoid deadlocks.
Condition-4 :
Circular Wait :
One way to avoid circular waits is to number all resources and require that processes request resources only in strictly increasing order, so a cycle of waits can never form, as sketched below.
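A minimal sketch (assumed example, using POSIX threads) of such resource ordering: every caller acquires the lower-numbered mutex first, so the deadlock shown earlier cannot occur.

/* Minimal sketch: break circular wait by a global lock ordering. */
#include <pthread.h>

pthread_mutex_t res[2] = {PTHREAD_MUTEX_INITIALIZER,
                          PTHREAD_MUTEX_INITIALIZER};

/* Lock two distinct resources in a fixed global order, regardless of
 * the order the caller asked for them in (a and b must differ). */
void lock_pair(int a, int b) {
    int lo = a < b ? a : b;
    int hi = a < b ? b : a;
    pthread_mutex_lock(&res[lo]);   /* always lower index first */
    pthread_mutex_lock(&res[hi]);
}

void unlock_pair(int a, int b) {
    pthread_mutex_unlock(&res[a]);
    pthread_mutex_unlock(&res[b]);
}

With this discipline, two threads that need the same pair of resources always contend on the lower-numbered one first, so neither can hold the second resource while waiting for the first.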
Deadlock Avoidance :
In deadlock avoidance, the system requires additional advance information about the maximum resources each process may request, and grants a request only if the resulting state is safe, i.e. some ordering of the processes can still run to completion (the approach taken by the Banker's algorithm).
Deadlock Detection :
If deadlocks are neither prevented nor avoided, the system can instead detect them, e.g. by searching for cycles in a resource-allocation or wait-for graph, and then recover using one of the following approaches.
Approach-1 :
Process Termination :
There are two basic approaches for recovering resources allocated to terminated processes, as follows.
1. Stop all processes that are involved in the deadlock. This does break the deadlock, but at the expense of terminating more processes than are absolutely necessary.
2. Terminate processes one at a time until the deadlock is broken. This terminates only as many processes as necessary, but requires re-running the deadlock-detection algorithm after each termination.
In the latter case, many factors can influence which process is terminated next, as follows.
1. The priority of the process.
2. How long the process has been running and how close it is to completion.
3. How many and what kind of resources the process holds. (Are they simple to preempt and restore?)
Approach-2 :
Resource Preemption :
When preempting resources to break the deadlock, three critical issues must be addressed:
1. Selecting a victim –
Many of the decision criteria outlined above apply to determine
which resources to preempt from which processes.
2. Rollback –
A preempted process should ideally be rolled back to a safe state
before the point at which that resource was originally assigned to
the process. Unfortunately, determining such a safe state can be
difficult or impossible, so the only safe rollback is to start from the
beginning. (In other words, halt and restart the process.)
3. Starvation –
How do you ensure that a process does not starve because its resources are constantly being preempted? One option is to use a priority system and raise the priority of a process whenever its resources are preempted; it should eventually gain a high enough priority that it will no longer be preempted. A sketch of this counter-measure follows.
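A minimal sketch (assumed example, not from the original text) of this aging scheme: the lowest-priority process is chosen as the victim, and its priority is raised each time so it eventually stops being chosen.

/* Minimal sketch: priority aging of preemption victims. */
#include <stdio.h>

struct proc { int id; int priority; };  /* higher value = higher priority */

/* Pick the lowest-priority process as the victim, then raise its
 * priority as compensation so it cannot starve. */
int select_victim(struct proc p[], int n) {
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (p[i].priority < p[victim].priority)
            victim = i;
    p[victim].priority++;   /* aging: preempted processes rise in priority */
    return victim;
}

int main(void) {
    struct proc p[] = {{1, 3}, {2, 1}, {3, 2}};
    for (int round = 0; round < 5; round++)
        printf("round %d: preempt P%d\n", round, p[select_victim(p, 3)].id);
    return 0;
}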
Memory Management
INTRODUCTION:
Memory management is the operating-system functionality that allocates and reclaims main memory among processes.
Segmented Paging
A solution to the problem of large page tables is to use segmentation along with paging to reduce the size of the page table. Traditionally, a program is divided into four segments, namely the code segment, data segment, stack segment, and heap segment.
[Diagram: segments of a process]
The size of the page table can be reduced by creating a page table for each segment. To accomplish this, hardware support is required. The address provided by the CPU is now partitioned into a segment number, page number, and offset.
The memory management unit (MMU) uses the segment table, which contains the address of each segment's page table (base) and its limit. The page table points to the page frames of the segments in main memory.
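A minimal sketch (with assumed bit widths, not from the original text) of how such an address might be partitioned: 4 bits of segment number, 16 bits of page number, and 12 bits of offset within a 4 KB page.

/* Minimal sketch: splitting a 32-bit virtual address under segmented
 * paging. The field widths are assumed example values. */
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12
#define PAGE_BITS   16

int main(void) {
    uint32_t vaddr = 0x1ABCD123;                       /* example address */
    uint32_t offset  = vaddr & ((1u << OFFSET_BITS) - 1);
    uint32_t page    = (vaddr >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t segment = vaddr >> (OFFSET_BITS + PAGE_BITS);

    /* The segment number indexes the segment table to find that
     * segment's page table (base + limit); the page number indexes
     * that page table; the offset is kept unchanged. */
    printf("segment=%u page=0x%x offset=0x%x\n", segment, page, offset);
    return 0;
}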
Advantages of Segmented Paging :
1. The page table size is reduced as pages are present only for the data of segments, hence reducing the memory requirements.
2. Since the entire segment need not be swapped out, swapping out to virtual memory becomes easier.
Paged Segmentation
Even with segmented paging, the page table can have a lot of invalid pages. Instead of using multi-level paging along with segmented paging, the problem of a large page table can be solved by directly applying multi-level paging instead of segmented paging.
Advantages of Paged Segmentation :
1. No external fragmentation
Allocation of Frames :
There are various constraints on the strategies for the allocation of frames:
You cannot allocate more than the total number of available frames.
Cache Memory
Characteristics of cache memory include:
1. Speed: Much faster than main memory (RAM), so frequently used data can be fetched with minimal delay.
2. Proximity: Located very close to the CPU, often on the CPU chip itself, reducing data access time.
Whenever the CPU needs any data, it searches the cache (a fast process); if the data is found, it processes the data according to its instructions. However, if the data is not found in the cache, the CPU searches for it in primary memory (a slower process) and loads it into the cache. This ensures frequently accessed data is always found in the cache, and hence minimizes the time required to access data.
Cache Hit: When the CPU finds the required data in the cache memory, allowing for quick access.
Cache Miss: When the required data is not found in the cache, forcing the CPU to retrieve it from the slower main memory.
Although cache and RAM are both used to increase the performance of the system, there are many differences in how they operate to increase the efficiency of the system.
Conclusion
Cache memory is typically built into the CPU and cannot be upgraded
separately. Upgrading the CPU can increase the amount and speed of
cache memory available.
When the cache memory is full, it uses algorithms like Least Recently
Used (LRU) to replace old data with new data. The least recently accessed
data is removed to make space for the new data.
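A minimal sketch (assumed example, not from the original text) of LRU replacement in a tiny fully-associative cache: each slot records the logical time of its last use, and on a miss the least recently used slot is evicted.

/* Minimal sketch: LRU replacement in a 4-slot cache. */
#include <stdio.h>

#define SLOTS 4

int tag[SLOTS];       /* which block each slot holds (-1 = empty) */
int last_use[SLOTS];  /* logical timestamp of the slot's last access */
int clock_now = 0;

void access_block(int block) {
    clock_now++;
    int lru = 0;
    for (int i = 0; i < SLOTS; i++) {
        if (tag[i] == block) {           /* cache hit */
            last_use[i] = clock_now;
            printf("hit  %d\n", block);
            return;
        }
        if (last_use[i] < last_use[lru])
            lru = i;                     /* remember least recently used */
    }
    printf("miss %d (evict slot %d)\n", block, lru);
    tag[lru] = block;                    /* replace LRU slot with new data */
    last_use[lru] = clock_now;
}

int main(void) {
    for (int i = 0; i < SLOTS; i++) tag[i] = -1;
    int trace[] = {1, 2, 3, 4, 1, 5, 2};  /* assumed access pattern */
    for (int i = 0; i < 7; i++) access_block(trace[i]);
    return 0;
}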
Multi-level caches are used to balance speed and cost. The L1 cache is the fastest and most expensive per byte, so it is small. L2 and L3 caches are progressively larger and slower, providing a larger total cache size while managing costs and maintaining reasonable speed.
Main Memory
Now we discuss the concepts of logical address space and physical address space.
Loading a process into main memory is done by a loader. There are two different types of loading :
1. Static loading – the entire program is loaded into a fixed memory location before execution starts.
2. Dynamic loading – routines of the program are loaded into memory only when they are called during execution.
Swapping
Swapping temporarily moves a process from main memory to a backing store (such as a disk) and later brings it back into memory so that execution can continue.
Fence Register
[Diagram: memory divided by the fence register into an operating-system area and a user-program area]
In this approach, the operating system keeps track of the first and
last location available for the allocation of the user program
Sharing of data and code does not make much sense in a single
process environment
The Operating system can be protected from user programs with the
help of a fence register.
[Diagram: main memory divided into fixed partitions p1, p2, p3, and p4]
Partition Table
Once partitions are defined, the operating system keeps track of the status of the memory partitions; this is done through a data structure called a partition table.
Starting Address   Size    Status
0k                 200k    allocated
450k               250k    allocated
When it is time to load a process into main memory, and there is more than one free block of memory of sufficient size, the OS decides which free block to allocate.
There are different placement algorithms (a sketch of First Fit follows this list):
1. First Fit
2. Best Fit
3. Worst Fit
4. Next Fit
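A minimal sketch (assumed example, not from the original text) of First Fit against a partition table like the one above: scan blocks in address order and pick the first free one large enough. The free blocks below are assumed values.

/* Minimal sketch: First Fit placement over a partition table. */
#include <stdio.h>

struct block { int start_kb; int size_kb; int free; };

/* Return the index of the first free block that can hold the request,
 * or -1 if none fits. */
int first_fit(struct block mem[], int n, int request_kb) {
    for (int i = 0; i < n; i++)
        if (mem[i].free && mem[i].size_kb >= request_kb)
            return i;
    return -1;
}

int main(void) {
    struct block mem[] = {
        {0,   200, 0},   /* allocated (as in the table above) */
        {200, 250, 1},   /* free (assumed) */
        {450, 250, 0},   /* allocated (as in the table above) */
        {700, 100, 1},   /* free (assumed) */
    };
    int i = first_fit(mem, 4, 120);
    if (i >= 0)
        printf("place 120k process at %dk\n", mem[i].start_kb);
    return 0;
}

Best Fit would instead scan all free blocks and pick the smallest one that fits; Worst Fit picks the largest; Next Fit resumes the scan from where the previous allocation left off.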
Real Storage
Explanation
Internal and external storage: Computers have two types of
physical storage: internal and external.
The bare machine and the resident monitor are not directly related to the operating system, but they are important components to study in the context of memory management. In this article, we discuss these two parts of the computer system one by one and then how they work.
Initially, before operating systems were developed, instructions were executed directly on the hardware without any intervening software. The drawback was that bare machines accepted instructions only in machine language, so only people with sufficient knowledge of computers were able to operate them. After the development of operating systems, the bare machine came to be regarded as inefficient.
After scheduling the jobs, the resident monitor loads the programs one by one into main memory according to their sequence. One important property of the resident monitor is that there is no gap between one program's execution and the next, so processing is faster.
Parts of Resident Monitor
The resident monitor has four parts, as follows:
1. Control Language Interpreter
2. Loader
3. Device Driver
4. Interrupt Processing
Loader: The second part of the resident monitor, and its main part, is the loader, which loads all the necessary system and application programs into main memory.
Fixed Partitioning
Fixed (or static) partitioning is one of the earliest and simplest memory
management techniques used in operating systems. It involves dividing
the main memory into a fixed number of partitions at system startup, with
each partition being assigned to a process. These partitions remain
unchanged throughout the system’s operation, providing each process
with a designated memory space. This method was widely used in early
operating systems and remains relevant in specific contexts like
embedded systems and real-time applications. However, while fixed
partitioning is simple to implement, it has significant limitations, including
inefficiencies caused by internal fragmentation.
There are two types of memory allocation in fixed partitioning:
1. Contiguous
2. Non-Contiguous
Disadvantages of fixed partitioning include:
1. Internal fragmentation: If a process is smaller than its partition, the leftover space inside the partition is wasted.
2. Limit on process size: A process larger than the largest partition in main memory cannot be accommodated. The partition size cannot be varied according to the size of the incoming process, hence a process of size 32MB in the above-stated example is invalid.
Clarification:
Internal fragmentation is a notable disadvantage in fixed partitioning,
whereas external fragmentation is not applicable because processes
cannot span across multiple partitions, and memory is allocated in fixed
blocks.
Question : Can a process larger than a partition be accommodated in fixed partitioning?
Answer :
No, in fixed partitioning, a process larger than the partition size cannot be
accommodated because partitions are of fixed size and cannot
dynamically adjust to a process’s memory requirements.
Variable (or Dynamic) Partitioning
Initially, RAM is empty and partitions are made during the run-time
according to the process’s need instead of partitioning during
system configuration.
The partition size varies according to the need of the process so that
internal fragmentation can be avoided to ensure efficient utilization
of RAM.
There are several techniques for non-contiguous memory allocation:
1. Paging
2. Multilevel paging
3. Inverted paging
4. Segmentation
5. Segmented paging
MMU scheme :
CPU ------ MMU ------ Memory
Dynamic relocation using a relocation register:
1. The CPU generates a logical address, e.g. 346.
2. The MMU holds the value of the relocation register (base register), e.g. 14000.
3. The relocation register value is added to the logical address to form the physical address, e.g. 346 + 14000 = 14346.
Address binding :
Address binding is the process of mapping from one address space to another. A logical address is an address generated by the CPU during execution, whereas a physical address refers to the location in the memory unit (the one that is loaded into memory). The logical address undergoes translation by the MMU, or the address translation unit in particular. The output of this process is the appropriate physical address, i.e. the location of the code/data in RAM.
Compile time –
If the memory location where the process will reside is known at compile time, absolute code can be generated; if this starting location later changes, the code must be recompiled.
Load time –
If it is not known at the compile time where the process will reside, then a
relocatable address will be generated. The loader translates the
relocatable address to an absolute address. The base address of the
process in main memory is added to all logical addresses by the loader to
generate an absolute address. In this, if the base address of the process
changes, then we need to reload the process again.
Execution time –
The instructions are in memory and are being processed by the CPU. Additional memory may be allocated and/or deallocated at this time. This is used if a process can be moved from one memory location to another during execution (dynamic linking, i.e. linking that is done during load or run time), e.g. compaction.
Mapping Virtual Addresses to Physical Addresses :
In contiguous memory allocation, mapping from virtual addresses to physical addresses is not a difficult task: when a process is taken from secondary memory and copied to main memory, its addresses are stored contiguously, so if we know the base address of the process, we can find the subsequent addresses.
For this purpose, the hardware uses two registers:
1. Base Register – contains the starting physical address of the process.
2. Limit Register – specifies the limit, relative to the base address, of the region occupied by the process.
The logical address generated by the CPU is first checked against the limit register; if the value of the logical address is less than the value of the limit register, the base address stored in the relocation register is added to the logical address to get the physical address of the memory location.
If the logical address value is greater than or equal to the limit register value, the CPU traps to the OS, and the OS terminates the program with a fatal addressing error. A sketch of this check follows.
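A minimal sketch (assumed example, not from the original text) of the limit check and relocation described above, reusing the 346 + 14000 = 14346 example from the MMU scheme.

/* Minimal sketch: base/limit address translation with a trap. */
#include <stdio.h>
#include <stdlib.h>

unsigned base_reg  = 14000;   /* starting physical address of the process */
unsigned limit_reg = 3000;    /* assumed size of the process's region */

unsigned translate(unsigned logical) {
    if (logical >= limit_reg) {          /* out of range: trap to the OS */
        fprintf(stderr, "trap: addressing error, process terminated\n");
        exit(EXIT_FAILURE);
    }
    return base_reg + logical;           /* relocate into physical memory */
}

int main(void) {
    printf("logical 346 -> physical %u\n", translate(346));   /* 14346 */
    return 0;
}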