Module 4-Os
MEMORY
MAIN MEMORY
Main memory is where programs and data are kept when the processor is actively
using them.
When programs and data become active, they are copied from secondary memory
into main memory where the processor can interact with them.
Main memory is associated with the processor, so moving instructions and
information into and out of the processor is extremely fast.
Main memory is also known as RAM (Random Access Memory). RAM is volatile
memory: it loses its data when a power interruption occurs.
MEMORY MANAGEMENT
The operating system takes care of mapping the logical addresses to physical
addresses at the time of memory allocation to the program.
There are two types of addresses used in a program before and after memory is
allocated –
Virtual (or logical) and physical addresses are the same in compile-time and
load-time address-binding schemes.
Virtual (or logical) and physical addresses differ in execution-time address-
binding scheme.
1. Logical Address
A logical address is an address that is generated by the CPU during program
execution.
The logical address is a virtual address as it does not exist physically, and
therefore, it is also known as a Virtual Address.
This address is used as a reference to access the physical memory location by
CPU.
The set of all logical addresses generated by a program is referred to as a
Logical Address Space or Virtual Address Space.
2. Physical Address
The physical address identifies the physical location of required data in
memory.
The user never directly deals with the physical address but can access it by its
corresponding logical address.
The user program generates logical addresses and behaves as if it runs in this
logical address space, but the program needs physical memory for its execution.
Therefore, logical addresses must be mapped to physical addresses by the
MMU before they are used.
The set of all physical addresses corresponding to these logical addresses is
referred to as a Physical Address Space.
Difference Between Logical & Physical Address
The run-time mapping between the virtual and physical addresses is done by a
hardware device known as MMU.
o In dynamic relocation, the base register is termed the Relocation register.
o The Relocation Register is a special register in the CPU and is used for
the mapping of logical addresses used by a program to physical addresses
of the system's main memory.
o The value in the relocation register is added to every address that is
generated by the user process at the time when the address is sent to the
memory.
o Suppose the base is at 14000, then an attempt by the user to address
location 0 is relocated dynamically to 14000; thus access to
location 356 is mapped to 14356.
o The user program always deals with logical addresses. The Memory
Management Unit (MMU) converts the logical addresses into physical
addresses.
o There are two types of addresses that we have:
1) Logical addresses (lies in the range 0 to max).
2) Physical addresses(lies in the range R+0 to R+max for the base
value R)
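The relocation-register mapping above can be sketched in a few lines of Python. The base value 14000 comes from the example; the limit value and the trap behaviour are illustrative assumptions about how an MMU rejects out-of-range addresses:

```python
def mmu_translate(logical_address, relocation_register=14000, limit=20000):
    """Map a logical address to a physical address via a relocation register.

    The limit is an illustrative assumption: a real MMU traps when the
    logical address falls outside the process's logical address space.
    """
    if not (0 <= logical_address < limit):
        raise MemoryError("trap: address outside process address space")
    # The relocation register's value is added to every generated address.
    return logical_address + relocation_register

print(mmu_translate(0))    # logical 0   -> physical 14000
print(mmu_translate(356))  # logical 356 -> physical 14356
```

So logical addresses lie in the range 0 to max, while the corresponding physical addresses lie in R+0 to R+max for base value R.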
What is Loading?
o To bring the program from secondary memory to main memory is called
Loading.
o It is performed by a loader, a special program that takes the executable file
produced by the linker as input, loads it into main memory, and prepares the
code for execution by the computer.
o There are two types of loading in the operating system:
1) Static loading:- The entire program is loaded into memory at a fixed
address before execution begins. It requires more memory space.
2) Dynamic loading:- With static loading, the entire program and all data of a
process must be in physical memory for the process to execute, so the size
of a process is limited to the size of physical memory. To achieve better
memory utilization, dynamic loading is used. In dynamic loading, a routine
is not loaded until it is called; all routines reside on disk in a
relocatable load format. One advantage of dynamic loading is that a routine
that is never used is never loaded. This is especially useful when large
amounts of code are needed only to handle infrequent cases.
What is Linking?
o Establishing the links between all the modules or functions of a program so
that program execution can proceed is called linking.
o It is done by a linker.
o Linking is a process of collecting and maintaining pieces of code and data into
a single file.
o It takes object modules from the assembler as input and forms an executable
file as output which is the input for the loader.
o There are two types of linking in the operating system:
1) Static linking:- In static linking, the linker combines all necessary
program modules into a single executable program. So there is no runtime
dependency. Some operating systems support only static linking, in which
system language libraries are treated like any other object module.
2) Dynamic linking:- The basic concept of dynamic linking is similar to
dynamic loading. In dynamic linking, a stub is included for each
appropriate library routine reference. A stub is a small piece of code.
When the stub is executed, it checks whether the needed routine is already
in memory. If it is not, the program loads the routine into memory.
Static Library & Shared Library
On the basis of how library code is stored and used at execution time, libraries
are classified into two types, i.e. static library and shared library.
Static Library
A static library is a library whose code is copied into the target application
by a compiler, linker, or binder, producing an object file and a stand-alone
executable that contains all the code needed to run.
Shared Library
A shared library is a library in which only a reference to the library is
placed in the target program, while the functions and code of the library
reside in one place in memory, so every program can access them without
keeping multiple copies.
Overlays
o The concept of overlays is that a running process does not need the complete
program in memory at the same time; it uses only some part of it at any moment.
o The overlay technique therefore loads only the part currently required; once
that part is done, it is unloaded (pulled back) and the next required part is
loaded and run in its place.
o Advantages:-
i. Reduce memory requirement
ii. Reduce time requirement
o Disadvantages:-
i. Overlap map must be specified by programmer
ii. Programmer must know memory requirement
iii. Overlapped module must be completely disjoint
iv. Programming design of overlays structure is complex and not possible in
all cases
o Example:-
The best example of overlays is a two-pass assembler. Having 2 passes means
that at any time it is doing only one thing, either the 1st pass or the 2nd
pass: it finishes the 1st pass completely and then runs the 2nd pass.
Let assume that available main memory size is 150KB and total code size is
200KB.
Pass 1.......................70KB
Pass 2.......................80KB
Symbol table.................30KB
Common routine...............20KB
Thus, we can define two overlays:-
Overlay A: Pass 1 + Symbol table + Common routine = 70 + 30 + 20 = 120KB
Overlay B: Pass 2 + Symbol table + Common routine = 80 + 30 + 20 = 130KB
Each overlay fits in the available 150KB of main memory, even though the total
code size is 200KB.
Swapping
Swapping is a technique of temporarily moving a process out of main memory
into secondary memory and bringing it back later for continued execution.
There are two important concepts in the process of swapping, which are as follows:
i. Swap In:- The method of bringing a process from secondary memory (hard
drive) into main memory (RAM) for execution is known as the Swap In
method.
ii. Swap Out:- The method of removing a process from main memory (RAM) and
sending it to secondary memory (hard drive), so that processes with
higher priority or greater memory needs can be executed, is known as the
Swap Out method.
Note:- Swap In and Swap Out method is done by Medium Term Scheduler(MTS).
Swapping allows more processes to be run than can fit into memory at one
time. Thus it increases the degree of multiprogramming.
Swapping is also known as Roll-out, Roll-in: if a higher priority process
arrives and wants service, the memory manager can swap out a lower priority
process and then load and execute the higher priority process. After the
higher priority work finishes, the lower priority process is swapped back
into memory and continues execution.
Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.
The dispatcher checks to see whether the next process in the ready queue is in
memory. If it is not, and if there is no free memory region, the dispatcher swaps
out a process currently in memory and swaps in the desired process. It then
reloads registers and transfers control to the selected process.
Single Contiguous Memory Allocation
o In this scheme, main memory (apart from the area reserved for the
operating system) is allocated to one process at a time.
o Advantages:-
1) Simple to implement.
2) Easy to manage and design.
3) In a single contiguous memory management scheme, once a
process is loaded, it is given the full processor's time, and no other
process will interrupt it.
o Disadvantages:-
1) Wastage of memory space due to unused memory as the process
is unlikely to use all the available memory space.
2) It cannot be executed if the program is too large to fit the entire
available main memory space.
3) It does not support multiprogramming, i.e., it cannot handle
multiple programs simultaneously.
Multiple-Partition Allocation
o The single Contiguous memory management scheme is inefficient as it
limits computers to execute only one program at a time resulting in
wastage in memory space and CPU time.
o The problem of inefficient CPU use can be overcome using
multiprogramming that allows more than one program to run
concurrently. To switch between two processes, the operating systems
need to load both processes into the main memory.
o The operating system needs to divide the available main memory into
multiple parts to load multiple processes into the main memory. Thus
multiple processes can reside in the main memory simultaneously. This
method is known as Multiple-partition allocation.
o The multiple partitioning schemes can be of two types:-
1) Fixed/Static Partitioning
2) Variable/Dynamic Partitioning
Fixed/Static Partitioning
The main memory is divided into several fixed-sized partitions. The
partitions can be of equal or unequal sizes.
Here partitions are made before execution begins, during system configuration.
Each partition can hold a single process. The number of partitions
determines the degree of multiprogramming, i.e., the maximum
number of processes in memory.
Whenever a process must be allocated memory, a free partition big enough
to hold the process is found, and that memory is allocated to the process.
If no suitable free partition is available, the process waits in a queue
until memory can be allocated.
In fixed partitioning,
1. The partitions cannot overlap.
2. A process must be contiguously present in a partition for the
execution.
Advantages:-
1. Simple to implement.
2. Easy to manage and design.
Disadvantages:-
1. Internal Fragmentation:- If the size of the process is less than
the total size of the partition, then part of the partition is
wasted and remains unused. This wasted memory is called internal
fragmentation. For example, if a 4 MB partition is used to load
only a 3 MB process, the remaining 1 MB is wasted.
2. External Fragmentation:- The total unused space across the various
partitions cannot be used to load a process, even though enough
space is available, because it is not contiguous. For example, if
1 MB remains unused in each of four partitions, that space cannot
be used as a unit to store a 4 MB process. Despite the fact that
sufficient total space is available, the process will not be
loaded.
3. Limitation on the size of the process:- If the process size is
larger than the size of maximum sized partition then that process
cannot be loaded into the memory. Therefore, a limitation can be
imposed on the process size that is it cannot be larger than the
size of the largest partition.
4. Degree of multiprogramming is less:- By Degree of multi
programming, we simply mean the maximum number of
processes that can be loaded into the memory at the same time.
In fixed partitioning, the degree of multiprogramming is fixed
and very less due to the fact that the size of the partition cannot
be varied according to the size of processes.
Variable/Dynamic Partitioning
Dynamic partitioning tries to overcome the problems caused by fixed
partitioning. In this technique, the partition size is not declared initially.
It is declared at the time of process loading.
The first partition is reserved for the operating system. The remaining
space is divided into parts.
The size of each partition will be equal to the size of the process. The
partition size varies according to the need of the process so that the
internal fragmentation can be avoided.
Advantages:-
1. No Internal Fragmentation:- Given the fact that the partitions in
dynamic partitioning are created according to the need of the
process, It is clear that there will not be any internal
fragmentation because there will not be any unused remaining
space in the partition.
2. No Limitation on the size of the process:- In fixed partitioning,
a process larger than the largest partition could not be executed
due to the lack of sufficient contiguous memory. Here, in dynamic
partitioning, the process size is not restricted in this way,
since the partition size is decided according to the process size.
3. Degree of multiprogramming is dynamic:- Due to the absence of
internal fragmentation, there will not be any unused space in the
partition hence more processes can be loaded in the memory at
the same time.
Disadvantages:-
1. External Fragmentation:- Absence of internal fragmentation
doesn't mean that there will be no external fragmentation. Consider
processes P1 (5 MB) and P3 (3 MB) loaded into their respective
partitions of the main memory. After some time P1 and P3 complete
and their assigned space is freed. Now there are two unused
partitions (5 MB and 3 MB) in the main memory, but they cannot be
used to load an 8 MB process P5, since they are not contiguously
located.
Partition Allocation Strategies
First fit: Allocate the first hole that is big enough. Searching can start either at
the beginning of the set of holes or where the previous first-fit search ended.
We can stop searching as soon as we find a free hole that is large enough.
Best fit: Allocate the smallest hole that is big enough. We must search the
entire list, unless the list is ordered by size. This strategy produces the smallest
leftover hole.
Worst fit: Allocate the largest hole. Again, we must search the entire list,
unless it is sorted by size. This strategy produces the largest leftover hole,
which may be more useful than the smaller leftover hole from a best-fit
approach.
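A minimal sketch of the three strategies over a list of free holes. The hole sizes and the index-based interface are illustrative assumptions; a real allocator would also split the chosen hole and maintain a free list:

```python
def find_hole(holes, size, strategy):
    """Return the index of the hole chosen for a request, or None.

    holes    : list of free hole sizes (e.g. in KB)
    strategy : 'first', 'best', or 'worst'
    """
    candidates = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None                                    # no hole is big enough
    if strategy == "first":
        return candidates[0][0]                        # first adequate hole
    if strategy == "best":
        return min(candidates, key=lambda c: c[1])[0]  # smallest adequate hole
    if strategy == "worst":
        return max(candidates, key=lambda c: c[1])[0]  # largest hole
    raise ValueError("unknown strategy")

holes = [100, 500, 200, 300, 600]      # illustrative free-hole sizes in KB
print(find_hole(holes, 212, "first"))  # 1 (the 500 KB hole)
print(find_hole(holes, 212, "best"))   # 3 (the 300 KB hole, smallest leftover)
print(find_hole(holes, 212, "worst"))  # 4 (the 600 KB hole, largest leftover)
```

Note how best fit leaves an 88 KB fragment while worst fit leaves 388 KB, which illustrates why the larger leftover may be more useful later.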
PAGING
o Paging is a memory management technique in which the logical address space
of a process is divided into fixed-size blocks called pages, and physical
memory is divided into blocks of the same size called frames. Any page can
be placed in any free frame, so a process need not occupy contiguous
physical memory.
o Example:- Consider a process divided into 4 pages P0, P1, P2 and P3.
CPU always generates a logical address. A physical address is needed to
access the main memory.
CPU generates a logical address consisting of two parts-
i. Page Number:- It specifies the specific page of the process
from which CPU wants to read the data.
ii. Offset:- It specifies the specific word on the page that CPU
wants to read.
For the page number generated by the CPU, Page Table provides
the corresponding frame number (base address of the frame) where
that page is stored in the main memory.
The frame number combined with the offset forms the required
physical address.
i. Frame number:- It specifies the specific frame where the
required page is stored.
ii. Offset:- It specifies the specific word that has to be read
from that page.
A page table base register (PTBR) holds the base address for the
page table of the current process.
The following diagram illustrates the translation of logical address
into physical address-
Formula for Address Calculation:-
Physical Address = (Frame Number × Frame Size) + Offset
where Page Number = Logical Address div Page Size and
Offset = Logical Address mod Page Size.
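The translation steps described above can be sketched as follows. The page size, the page-table contents, and the frame numbers for the 4-page process are illustrative assumptions:

```python
PAGE_SIZE = 1024  # bytes per page/frame (illustrative assumption)

# Hypothetical page table for a 4-page process: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a physical address."""
    page_number = logical_address // PAGE_SIZE   # which page of the process
    offset = logical_address % PAGE_SIZE         # which word within the page
    if page_number not in page_table:
        raise MemoryError("trap: page number outside process address space")
    frame_number = page_table[page_number]       # looked up in the page table
    return frame_number * PAGE_SIZE + offset     # frame base + offset

# Logical address 2060 = page 2, offset 12 -> frame 7 -> 7*1024 + 12.
print(translate(2060))  # 7180
```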
Frame Table: It stores the details about the availability and allocation of frames.
Disadvantages of Paging:- It increases the effective access time because two
memory accesses are needed: one memory access to get the frame number from the
page table, and another memory access to get the word from the page.
Translation Lookaside Buffer (TLB)
o Translation Lookaside Buffer (TLB) is a solution that tries to reduce
the effective access time.
o Being a small, fast piece of hardware, the TLB has a much lower access
time than main memory.
o Translation Lookaside Buffer (TLB) consists of two columns-
1) Page Number
2) Frame Number
o Note:-
Initially, the TLB is empty, so TLB misses are frequent. With every
access that goes through the page table, the TLB is updated. After some
time, TLB hits increase and TLB misses reduce.
The time taken to update the TLB after getting the frame number from
the page table is negligible; the TLB can be updated in parallel while
fetching the word from the main memory.
On a context switch, the TLB is flushed and then refilled with entries
of the currently running process.
o Advantages:-
1. TLB reduces the effective access time.
2. Only one memory access is required when TLB hit occurs.
o Disadvantages:-
1. TLB can hold the data of only one process at a time.
2. When context switches occur frequently, the performance of TLB
degrades due to low hit ratio.
3. As it is a special hardware, it involves additional cost.
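The benefit of the TLB can be checked with a small effective access time (EAT) calculation. The formula EAT = h·(t_tlb + t_mem) + (1 − h)·(t_tlb + 2·t_mem) assumes one extra memory access on a TLB miss; the timing values below are illustrative assumptions:

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    """Effective memory access time with a TLB.

    On a TLB hit : one TLB lookup + one memory access.
    On a TLB miss: one TLB lookup + two memory accesses
                   (page table, then the word itself).
    """
    hit_cost = tlb_time + mem_time
    miss_cost = tlb_time + 2 * mem_time
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# Illustrative numbers: 20 ns TLB lookup, 100 ns memory access.
print(effective_access_time(0.80, 20, 100))  # 140.0 ns
print(effective_access_time(0.98, 20, 100))  # 122.0 ns
```

As the hit ratio improves, the EAT approaches the single-access cost, which is exactly why frequent context switches (which lower the hit ratio) hurt TLB performance.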
Protection in Paging
o A bit or bits can be added to the page table to classify a page as read-
write, read-only, read-write-execute, or some combination of these sorts
of things. Then each memory reference can be checked to ensure it is
accessing the memory in the appropriate mode.
o Valid-Invalid bit: Each page table entry has a valid-invalid bit (v -> in-
memory, i -> not-in-memory). Initially it is set to 'i' for all entries.
o During address translation, if the valid-invalid bit is 'i' then it could be:
i. Illegal reference (Outside process address space) -> aborts the
process.
ii. Legal reference but not in memory (Page fault) -> brings the page.
o Many processes do not use all of the page table available to them,
particularly in modern systems with very large potential page tables.
Rather than waste memory by creating a full-size page table for every
process, some systems use a page-table length register, PTLR, to specify
the length of the page table.
Sharing in Paging
o An advantage of paging is the possibility of sharing common code. This
is important in a time-sharing environment. If the code is reentrant code
(or pure code), it can be shared.
o Reentrant code is non-self-modifying code; it never changes during
execution. Thus, two or more processes can execute the same code at the
same time. Each process has its own copy of registers and data storage to
hold the data for the process's execution. The data for different processes
will be different.
o For example, if many users run the same text editor, only one copy of the
editor needs to be kept in physical memory. Each user's page table maps
onto the same physical copy of the editor, but data pages are mapped onto
different frames. Other heavily used programs such as compilers, window
systems, run-time libraries, and database systems can also be shared. To
be sharable, the code must be reentrant.
o The sharing of memory among processes on a system is similar to the
sharing of the address space of the task by threads. Shared memory can
also be described as a method of inter-process communication. Some
operating systems implement shared memory using shared pages.
SEGMENTATION
Segment Table
o Segment table is a table that stores the information about each segment
of the process.
o Segment table is stored as a separate segment in the main memory.
o Segment table has two columns:
i. Limit indicates the length or size of the segment.
ii. Base indicates the base address or starting address of the segment
in the main memory.
o Segment Table Base Register (STBR) stores the base address of the
segment table.
o Segment Table Length Register (STLR) indicates the number of
segments used by a program.
Translating Logical Address into Physical Address-
o CPU always generates a logical address. A physical address is needed to
access the main memory.
o Following steps are followed to translate logical address into physical
address-
1) CPU generates a logical address consisting of two parts- Segment
number and Segment offset; Segment Number specifies the
specific segment of the process from which CPU wants to read
the data. Segment Offset specifies the specific word in the
segment that CPU wants to read.
2) For the generated segment number, corresponding entry is located
in the segment table. Then, segment offset is compared with the
limit (size) of the segment. Now, two cases are possible-
i. Case-01: Segment Offset >= Limit: If segment offset is found
to be greater than or equal to the limit, a trap is generated.
ii. Case-02: Segment Offset < Limit: If segment offset is found
to be smaller than the limit, then request is treated as a valid
request. The segment offset must always lie in the range [0,
limit-1]. Then, segment offset is added with the base address
of the segment. The result obtained after addition is the
address of the memory location storing the required word.
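The limit check and base addition described in the steps above can be sketched as follows. The segment table contents are illustrative assumptions:

```python
# Hypothetical segment table: segment number -> (limit, base).
segment_table = {0: (1000, 1400), 1: (400, 6300), 2: (1100, 4300)}

def translate(segment_number, offset):
    """Validate the offset against the segment's limit, then add the base."""
    limit, base = segment_table[segment_number]
    if offset >= limit:               # Case-01: offset outside the segment
        raise MemoryError("trap: segment offset exceeds limit")
    return base + offset              # Case-02: valid request

# Offset 53 in segment 2 -> base 4300 + 53 = 4353.
print(translate(2, 53))  # 4353
```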
o Advantages:-
1. It allows to divide the program into modules which provides
better visualization.
2. Segment table consumes less space as compared to Page Table
in paging.
3. It solves the problem of internal fragmentation.
o Disadvantages:-
1. There is an overhead of maintaining a segment table for each
process.
2. The time taken to fetch an instruction increases, since two
memory accesses are now required.
3. Segments of unequal size are not suited for swapping.
4. It suffers from external fragmentation as the free space gets
broken down into smaller pieces with the processes being loaded
and removed from the main memory.
Protection in Segmentation
o With each entry in the segment table we associate a validation bit:
0 for an illegal segment, 1 for a legal segment.
o Protection bits associated with segments grant read/write/execute
privileges.
Sharing in Segmentation
o An advantage of segmentation is the possibility of sharing common
code.
o Segment-table-entries of two different processes point to the same
physical locations.
VIRTUAL MEMORY
PAGE REPLACEMENT
1. First In First Out Page Replacement (FIFO)
o It is the simplest page replacement algorithm. The operating system
keeps the pages in memory in a queue; when a page must be replaced,
the oldest page (at the front of the queue) is chosen.
Belady's Anomaly
It states that for some page reference strings, the page fault rate
may increase as the number of allocated frames increases.
For example, consider the page reference string: 1, 2, 3, 4, 1, 2,
5, 1, 2, 3, 4, 5. With 3 frames, FIFO incurs 9 page faults; with 4
frames, it incurs 10.
Advantages:-
1) Simple to understand and implement
2) Causes little overhead
Disadvantages:-
1) Poor performance
2) Doesn't consider how recently or frequently a page was used;
it simply replaces the oldest page.
3) Suffers from Belady's anomaly.
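A short FIFO sketch, run on the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, reproduces Belady's anomaly (more frames, yet more faults):

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()                    # front = oldest page in memory
    faults = 0
    for page in reference_string:
        if page not in frames:          # page fault
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3))  # 9 faults
print(fifo_page_faults(refs, 4))  # 10 faults: Belady's anomaly
```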
2. Least Recently Used Page Replacement (LRU)
o It keeps the track of usage of pages over a period of time.
o This algorithm works on the basis of the principle of locality of a
reference (Caching Technique) which states that a program has a
tendency to access the same set of memory locations repetitively over a
short period of time.
o So pages that have been used heavily in the past are most likely to be
used heavily in the future also.
o In this algorithm, when a page fault occurs, then the page that has not
been used for the longest duration of time (that is least recently used one)
is replaced by the newly requested page.
o Example:- Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1,
3, 1, 6, 3, 2, 3 with 4 frames (i.e. a maximum of 4 pages in memory at a time).
Advantages:-
1) Efficient
2) Doesn't suffer from Belady's Anomaly.
Disadvantages:-
1) Complex Implementation.
2) Expensive.
3) Requires hardware support.
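LRU on the example string above can be sketched with an ordered dictionary, keeping the most recently used pages at the end:

```python
from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU page replacement."""
    frames = OrderedDict()    # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]
print(lru_page_faults(refs, 4))  # 8 faults on the example string
```

The `OrderedDict` stands in for the hardware support (counters or a stack) that a real LRU implementation requires.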
3. Optimal Page Replacement
o In this algorithm, the page that will not be used for the longest period
of time in the future is replaced. It gives the lowest possible page
fault rate for a fixed number of frames.
Advantages:-
1) Excellent efficiency
2) Used as the benchmark for other algorithms
Disadvantages:-
1) More time consuming
2) Needs future knowledge of the program's references, which is not
possible every time
Thrashing
To know more clearly about thrashing, first, we need to know about page
fault and swapping.
Page fault: We know every program is divided into pages. A
page fault occurs when a program attempts to access data or code
that is in its address space but is not currently located in system RAM.
Swapping: Whenever a page fault happens, the operating system
will try to fetch that page from secondary memory and try to swap
it with one of the pages in RAM. This process is called swapping.
Thrashing occurs when page faults and swapping happen very frequently,
so the operating system must spend more time swapping pages than doing
useful work. This state of the operating system is known as thrashing.
Because of thrashing, CPU utilization becomes reduced or negligible.