OS (BCA SEM3) Unit 3
UNIT – 3
Operating System
MAYUR PARMAR
Assistant Professor, Smt. V. V. Shah M.Sc. (CA&IT) Institute, Modasa
Assistant professor (Dimpi Gor)
Memory Allocation
Memory allocation is the action of assigning physical or virtual memory address space to a process (its instructions and data). The two fundamental methods of memory allocation are static and dynamic.
Before a process can execute, it must first be placed in memory. Assigning space to a process in memory is called memory allocation, and it is one aspect of the more general term binding. Accordingly, there are two types of memory allocation, or two methods of binding: static and dynamic.
Static Memory Allocation
Static memory allocation is performed when the compiler compiles the program and generates object files. The linker merges these object files into a single executable file, which the loader then loads into main memory for execution. In static memory allocation, the size of the data required by the process must be known before the process starts executing.
If the data sizes are not known before the process executes, they have to be guessed. If the guessed size is larger than required, memory is wasted; if it is smaller, the program may fail to run correctly.
Static memory allocation needs no allocation operations during the execution of the process: all of them are completed before execution starts. This leads to faster execution of the process, which is why static memory allocation is often more efficient than dynamic memory allocation in this respect.
Advantages
1. Static memory allocation provides an efficient way of assigning memory to a process.
2. It gives faster execution, since no time is spent allocating memory while the program runs.
Disadvantages
1. Static memory allocation can waste memory, because the size of memory required by the program is only estimated. If the estimate is too large, memory is wasted; if it is too small, the program will not execute properly.
Dynamic Memory Allocation
Dynamic memory allocation is performed while the program is executing. Memory is allocated to an entity of the program when that entity is used for the first time at run time.
Because the actual size of the data is known at run time, the exact amount of memory can be allocated to the program, reducing memory wastage.
Allocating memory dynamically creates overhead for the system. Some allocation operations are performed repeatedly during program execution, adding further overhead and slowing the program down.
Dynamic memory allocation does not require special support from the operating system. It is the programmer's responsibility to design the program so that it takes advantage of the dynamic memory allocation method.
Advantages
1. Dynamic memory allocation provides a flexible way of assigning memory to a process.
2. It reduces memory wastage, since memory is assigned during the execution of the program, when the exact memory size required is known.
3. If the program is large, dynamic memory allocation can be performed on different parts of it, assigning memory only to the part currently in use. This also reduces memory wastage and improves system performance.
Disadvantages
1. Dynamic memory allocation has the overhead of assigning memory to a process during its execution.
2. The memory allocation operations are sometimes repeated several times during execution, which adds further overhead.
3. This run-time allocation overhead slows down execution to some extent.
Virtual Memory
Virtual memory is a storage mechanism that gives the user the illusion of having a very large main memory. It works by treating part of secondary storage as if it were main memory, so the user can run processes larger than the available physical memory.
• Whenever the computer runs out of space in physical memory, it writes what it needs to remember to a swap file on the hard disk, as virtual memory.
Page faults
When a process tries to reference a page not currently present in RAM, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system. The operating system must then locate the page in secondary storage, find a free frame in main memory, read the page into that frame, update the page table, and restart the interrupted instruction.
Demand paging:
In computer operating systems, demand paging is a method of virtual memory management. In
a system that uses demand paging, the operating system copies a disk page into physical memory
only if an attempt is made to access it and that page is not already in memory (i.e., if a page
fault occurs). It follows that a process begins execution with none of its pages in physical
memory, and many page faults will occur until most of a process's working set of pages are
located in physical memory. This is an example of a lazy loading technique.
Demand paging follows that pages should only be brought into memory if the executing process
demands them. This is often referred to as lazy evaluation as only those pages demanded by the
process are swapped from secondary storage to main memory. Contrast this to pure swapping,
where all memory for a process is swapped from secondary storage to main memory during the
process startup.
Commonly, to achieve this process a page table implementation is used. The page table
maps logical memory to physical memory. The page table uses a bitwise operator to mark if a
page is valid or invalid. A valid page is one that currently resides in main memory. An invalid
page is one that currently resides in secondary memory. When a process tries to access a page,
the following steps are generally followed:
• Check whether the memory reference is a valid reference to a location in secondary storage. If not, terminate the process (illegal memory access); otherwise the required page must be paged in.
• Schedule a disk operation to read the desired page into main memory, and update the page table once the page arrives.
• Restart the instruction that was interrupted by the operating system trap.
Advantages
Demand paging, as opposed to loading all pages immediately:
• Loads only the pages the process actually demands, so less memory is used per process.
• Starts processes faster, since only the first needed pages must be read in.
• Allows more processes to reside in main memory at the same time.
Disadvantages
• Low-cost, low-power embedded systems may not have a memory management unit that
supports page replacement.
• Memory management with page replacement algorithms becomes slightly more complex.
• Possible security risks, including vulnerability to timing attacks
• Thrashing which may occur due to repeated page faults.
There are two main aspects of virtual memory: frame allocation and page replacement. It is very important to have optimal frame allocation and page replacement algorithms. Frame allocation determines how many frames are to be allocated to a process, while page replacement determines which page needs to be replaced in order to make space for the requested page.
Fragmentation
Fragmentation commonly occurs when old files are opened, modified, and subsequently saved. For example, when a previously saved document is opened and added to, the file becomes larger than the physical space it occupied when first saved. The operating system then breaks the file into two or more pieces and stores those pieces (fragments) in different parts of the storage area.
The file system, such as File Allocation Table (FAT) or NTFS, keeps a record of where the different fragments of the file are stored.
When the operating system requires the file again, it queries the file system (FAT, NTFS, or other) to find out where the different fragments of the file are located on the partition (drive).
Defragmentation
As noted above, defragmentation rejoins the fragmented parts of a file: it reads the file fragments and writes them back into consecutive areas of the storage.
Defragmenting can be time consuming, but it is one of the easiest ways to improve a computer's performance. How often a PC should be defragmented depends directly on how heavily the drive is used.
I/O Data Transfer Modes
Data transfer between the CPU and I/O devices can be done in a variety of modes. There are three possible modes:
1. Programmed I/O
2. Interrupt initiated I/O
3. Direct Memory Access (DMA)
Programmed I/O
In this mode, data transfer is initiated by instructions written in the program. An input instruction transfers data from the device to the CPU, and a store instruction transfers data from the CPU to the device.
Data transfer in this mode requires the CPU to constantly monitor the peripheral device, and to keep checking for the possibility of a new transfer once one has been initiated. The CPU therefore stays in a loop until the I/O device indicates that it is ready for data transfer.
Programmed I/O is thus a time-consuming process that keeps the processor needlessly busy and wastes CPU cycles.
This can be overcome by the use of an interrupt facility. This forms the basis for the Interrupt
Initiated I/O.
Interrupt Initiated I/O
This mode uses an interrupt facility and special commands that tell the interface to issue an interrupt when data becomes available and the interface is ready for the transfer. In the meantime the CPU keeps executing other tasks and does not need to check the flag. When the flag is set, the interface raises an interrupt.
The interrupt causes the CPU to deviate from what it is doing in order to handle the I/O transfer. The CPU responds by storing the return address from the program counter (PC) on the memory stack and then branching to a service routine that processes the I/O request. After the transfer is complete, the CPU returns to the task it was previously executing.
The branch address of the service routine can be chosen in two ways, known as vectored and non-vectored interrupts. In a vectored interrupt, the interrupting source supplies the branch information to the CPU; in a non-vectored interrupt, the branch address is a fixed location in memory.
Comparison: in programmed I/O, the CPU cannot do any other work until the transfer is complete, since it must stay in a loop continuously monitoring the peripheral device. With interrupt initiated I/O, the CPU can do other work until it is interrupted by a signal indicating the readiness of the device for data transfer.