Unit III Ssos Notes

This document provides an overview of operating systems, focusing on process concepts, states, and storage management strategies. It explains the definitions of processes, their states, transitions, and the role of interrupts in managing system resources. Additionally, it discusses memory allocation techniques, including contiguous and non-contiguous memory allocation, along with various memory management strategies.
Unit-III - SSOS notes

B.Sc. Computer Science (Bharathiar University)


DEPARTMENT OF COMPUTER APPLICATIONS


CORE 6: SYSTEM SOFTWARE & OPERATING SYSTEM
UNIT-III
SYLLABUS
What is an Operating System? - Process Concepts: Definition of Process - Process
States - Process States Transition - Interrupt Processing - Interrupt Classes - Storage
Management: Real Storage: Real Storage Management Strategies – Contiguous
versus Non-contiguous storage allocation – Single User Contiguous Storage
allocation- Fixed partition multiprogramming – Variable partition
multiprogramming.

OPERATING SYSTEM
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input/output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware. Application code is usually executed directly by the hardware, but it frequently makes system calls to OS functions or is interrupted by them. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers.

PROCESS- DEFINITIONS OF PROCESS


1. A program in execution
2. An asynchronous activity
3. The “animated spirit” of a procedure
4. The “locus of control” of a procedure in execution


5. That entity to which processors are assigned


6. The “dispatchable” unit

A process is a program at the time of execution. A process is more than the program code: it includes the program counter, the process stack, and the contents of the processor's registers. The purpose of the process stack is to store temporary data, such as subroutine parameters, return addresses and temporary variables.

More formally, a process is an instance of a program running on a computer: the entity that can be assigned to and executed on a processor, and a unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources.

PROCESS STATES

Start: The process is being created.

Running: The process is being executed.

Waiting: The process is waiting for some event to occur.

Ready: The process is waiting to be assigned to a processor.

Terminate: The process has finished execution.

At any time, at most one process can be running on each processor, while many other processes may be waiting in the ready queue. The figure below depicts the state diagram of the process states.


In a uniprocessor system only one process may be running at a time, while several may be ready and several blocked. The operating system maintains a ready list of ready processes and a blocked list of blocked processes. The ready list is maintained in priority order, so the next process to receive a processor is the first one in the list (i.e., the process with the highest priority). The blocked list is typically unordered: processes do not become unblocked (i.e., ready) in priority order, but rather in the order in which the events they are waiting for occur.
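The priority-ordered ready list described above can be sketched with a min-heap, where a lower number means a higher priority. The process names and priorities below are invented for illustration:

```python
import heapq

# Ready list kept in priority order; lower number = higher priority.
# Process names and priorities are illustrative assumptions.
ready_list = []
heapq.heappush(ready_list, (2, "P3"))
heapq.heappush(ready_list, (0, "P1"))
heapq.heappush(ready_list, (1, "P2"))

def dispatch():
    """Give a processor to the first process on the ready list,
    i.e., the one with the highest priority."""
    priority, name = heapq.heappop(ready_list)
    return name
```

Successive calls to `dispatch()` hand out P1, then P2, then P3, mirroring the head-of-list rule in the text.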

PROCESS STATE TRANSITIONS

When a user runs a program, processes are created and inserted into the ready list. A process moves toward the head of the list as other processes complete their turns using a processor. When a process reaches the head of the list and a processor becomes available, that process is given a processor.


New to Ready

The operating system creates a process and prepares it for execution; the operating system then moves the process into the ready queue.

Ready to Running

When it is time to select a process to run, the operating system selects one of the jobs from the ready queue and moves the process from the ready state to the running state; the process is said to make a state transition from the ready state to the running state. The act of assigning a processor to the first process on the ready list is called dispatching, and it is performed by a system entity called the dispatcher. The state transition is,

dispatch (process name): ready->running

Running to Terminated

When the execution of a process has completed, the operating system terminates that process from the running state. Sometimes the operating system terminates a process for other reasons, including time limit exceeded, memory unavailable, access violation, protection error, I/O failure, data misuse, and so on.

Running to Ready

When the time slot for the processor expires or if the processor receives an interrupt
signal, then the operating system shifts the running process to the ready state.
Processes that are in the ready or running states are said to be awake. To prevent any
one process from monopolizing (controlling) the system, either accidentally or
maliciously the operating system sets a hardware interrupting clock (also called an
interval timer) to allow a process to run for a specific time interval or quantum. The
state transition is,

timerrunout (process name): running->ready

For example, process P1 is being executed by the processor; at that time, process P2 generates an interrupt signal to the processor. The processor compares the priorities of processes P1 and P2. If P1 > P2, the processor continues executing P1; otherwise, the processor switches to process P2, and process P1 is moved to the ready state.


Running to Waiting

A process is put into the waiting state if it needs an event to occur or an I/O device to be read. If the operating system cannot provide the I/O or event immediately, it moves the process to the waiting state.

Waiting to Ready

A process in the blocked state is moved to the ready state when the event for which it
has been waiting occurs.

For example, a process in the running state that needs an I/O device is moved to the waiting (blocked) state. When the I/O device is provided by the operating system, the process is moved from the waiting (blocked) state back to the ready state.

Running to Block

If a running process initiates an input/output operation before its quantum expires, it voluntarily relinquishes the CPU (i.e., the process blocks itself pending completion of the input/output operation). The state transition is,

block (process name): running->blocked

Block to Ready

The only other allowable state transition in the three-state model occurs when an I/O operation (or some other event the process is waiting for) completes. In this case, the operating system transitions the process from the blocked state to the ready state. The state transition is,

wakeup (process name): blocked->ready
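The four transitions above (dispatch, timerrunout, block and wakeup) can be sketched as a small state machine. The dict-based model below is an illustrative assumption, not how a real dispatcher is built:

```python
# Each transition names the state it leaves and the state it enters,
# following the notes: dispatch, timerrunout, block, wakeup.
VALID = {
    "dispatch":    ("ready",   "running"),
    "timerrunout": ("running", "ready"),
    "block":       ("running", "blocked"),
    "wakeup":      ("blocked", "ready"),
}

def transition(state, event):
    """Return the new state, or raise if the transition is not allowed."""
    src, dst = VALID[event]
    if state != src:
        raise ValueError(f"cannot {event} from {state}")
    return dst

# A process that is dispatched, blocks on I/O, then wakes up:
s = "ready"
s = transition(s, "dispatch")   # ready -> running
s = transition(s, "block")      # running -> blocked
s = transition(s, "wakeup")     # blocked -> ready
```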

INTERRUPT

Interrupts enable software to respond to signals from hardware. The operating system may specify a set of instructions, called an interrupt handler, to be executed in response to each type of interrupt. This allows the operating system to gain control of the processor to manage system resources. An interrupt that is synchronous with the operation of the process is called a trap; examples include dividing by zero or referencing protected memory. Interrupts may also be caused by some event that is unrelated to a process's current instruction and is therefore asynchronous with the operation of the process. For example, the keyboard generates an interrupt when a user presses a key, and the mouse generates an interrupt when it moves or when one of its buttons is pressed. Interrupts provide a low-overhead means of gaining the attention of a processor. Polling is an alternative to interrupts: the processor repeatedly requests the status of each device, which increases overhead as the complexity of the system increases.

Difference between polling and interrupts

A simple example is a microwave oven. A chef may set a timer to expire after an appropriate number of minutes (the timer sounding after this interval interrupts the chef), or the chef may regularly peek through the oven's glass door and watch as the roast cooks (this kind of regular monitoring is an example of polling).

Interrupt processing

Handling Interrupts

1. The interrupt line, an electrical connection between the mainboard and a processor, becomes active. Devices such as timers, peripheral cards and controllers send signals that activate the interrupt line to inform a processor that an event has occurred (e.g., a period of time has passed or an I/O request has completed). Most processors contain an interrupt controller that orders interrupts according to their priority so that important interrupts are serviced first; other interrupts are queued until all higher-priority interrupts have been serviced.


2. After the interrupt line becomes active, the processor completes execution of the current instruction, then pauses the execution of the current process. To pause process execution, the processor must save enough information that the process can be resumed at the correct place and with the correct register information.

3. The processor then passes control to the appropriate interrupt handler. Each type
of interrupt is assigned a unique value that the processor uses as an index into the
interrupt vector, which is an array of pointers to interrupt handlers. The interrupt
vector is located in memory that processes cannot access, so that processes cannot
modify its contents.

4. The interrupt handler performs appropriate actions based on the type of interrupt.

5. After the interrupt handler completes, the state of the interrupted process is
restored.

6. The interrupted process (or some other "next process") executes. It is the
responsibility of the operating system to determine whether the interrupted process
or some other "next process" executes.
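The interrupt vector of step 3 can be modeled as an array indexed by interrupt number whose entries point to handler routines. The handler names and numbering below are assumptions for illustration, not any real instruction set architecture:

```python
# Each handler receives the saved context of the interrupted process.
def svc_handler(ctx):   return f"supervisor call from {ctx}"
def io_handler(ctx):    return f"I/O completion for {ctx}"
def timer_handler(ctx): return f"quantum expired for {ctx}"

# The interrupt vector: an array of pointers to interrupt handlers.
INTERRUPT_VECTOR = [svc_handler, io_handler, timer_handler]

def handle_interrupt(number, saved_context):
    # Steps 3-4: use the interrupt number as an index into the
    # vector, then pass control to the selected interrupt handler.
    handler = INTERRUPT_VECTOR[number]
    return handler(saved_context)
```

In a real system the vector lives in protected memory, as the text notes, so processes cannot redirect the entries.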

Interrupt classes

There are six interrupt classes. These are,

1. SVC (Supervisor call) interrupts


2. I/O interrupts
3. External interrupts
4. Restart interrupts
5. Program check interrupts
6. Machine check interrupts

SVC (Supervisor call) interrupts

These are initiated by a running process that executes the SVC instruction. An SVC is a user-generated request for a particular system service, such as performing input/output. SVCs help keep the operating system secure from users: a user may not arbitrarily enter the operating system, but must request a service through an SVC.


I/O interrupts

These are initiated by the input/output hardware. They signal the CPU that the status of a device has changed. I/O interrupts are caused when an I/O operation completes or when an I/O error occurs.

External interrupts

These are caused by various events, including the expiration of a quantum on an interrupting clock, the pressing of the console's interrupt key by the operator, or the receipt of a signal from another processor on a multiprocessor system.

Restart interrupts

These occur when the operator presses the console's restart button, or when a restart SIGP (Signal Processor) instruction arrives from another processor on a multiprocessor system.

Program check interrupts

These are caused by a wide range of problems that may occur as a program's machine-language instructions are executed: for example, division by zero, arithmetic overflow or underflow, data in the wrong format, or an attempt to reference a memory location beyond the limits of real storage.

Machine check interrupts

These are caused by malfunctioning (not working) hardware.

STORAGE MANAGEMENT

The term storage management encompasses the technologies and processes organizations use to maximize or improve the performance of their data storage resources. It is a broad category that includes virtualization, replication, mirroring, security, compression, traffic analysis, process automation, storage provisioning and related techniques. The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding which gets memory, when they receive it, and how much they are allowed. When memory is allocated, it determines which memory locations will be assigned, and it tracks when memory is freed or unallocated and updates the status.

Storage Hierarchy

Hierarchical memory organization

Programs and data must be in main memory before the system can execute or
reference them. Those that the system does not need immediately may be kept in
secondary storage until needed, then brought into main memory for execution or
reference.

Secondary storage media, such as tape or disk, are generally far less costly per bit than
main memory and have much greater capacity. Main storage may generally be
accessed much faster than secondary storage. The memory hierarchy contains levels
characterized by the speed and cost of memory in each level. Systems move programs
and data back and forth between the various levels. The cache is a high-speed storage
that is much faster than main storage.

Cache memory imposes one more level of data transfer on the system. Cache storage
is extremely expensive compared with main storage. Programs in main memory are
transferred to the cache before being executed—executing programs from cache is
much faster than from main memory.


Storage management strategies

Memory management strategies are designed to obtain the best possible use of main
memory.

They are divided into:

1. Fetch strategies
2. Placement strategies
3. Replacement strategies

Fetch strategies

It determines when to move the next piece of a program or data to main memory from
secondary storage.

Fetch strategies are divided into two types:

1. Demand fetch strategies


2. Anticipatory fetch strategies

Demand fetch strategy: The system places the next piece of program or data in main memory when a running program references it. Early designers believed that, because one cannot in general predict the paths of execution that programs will take, the overhead involved in making guesses would far exceed the expected benefits.

Anticipatory fetch strategies: Today, however, many systems have increased performance by employing anticipatory fetch strategies, which attempt to load a piece of program or data into memory before it is referenced.

Placement strategies

It determines where in main memory the system should place incoming program or
data pieces. Consider first-fit, best-fit, and worst-fit memory placement strategies.
program and data can be divided into fixed-size pieces called pages. It can be placed
in any available page frame.


Replacement strategies

When memory is too full to accommodate a new program, the system must remove
some (or all) of a program or data that currently resides in memory. The system's
replacement strategy determines which piece to remove.

DEFINITION OF CONTIGUOUS MEMORY ALLOCATION

The operating system and the user’s processes must both be accommodated in main memory. The main memory is therefore divided into two partitions:

1. one partition in which the operating system resides

2. another in which the user processes reside

Usually, several user processes must reside in memory at the same time, so it is important to consider how memory is allocated to processes. Contiguous memory allocation is one such method: when a process requests memory, a single contiguous section of memory blocks is assigned to the process according to its requirement.

DEFINITION NON-CONTIGUOUS MEMORY ALLOCATION

Non-contiguous memory allocation allows a process to acquire several memory blocks at different locations in memory according to its requirement. Non-contiguous memory allocation also reduces the memory wastage caused by internal and external fragmentation, because it utilizes the memory holes created during internal and external fragmentation.


Paging and segmentation are the two ways that allow a process's physical address space to be non-contiguous. In non-contiguous memory allocation, the process is divided into blocks (pages or segments), which are placed into different areas of memory according to the availability of memory. Non-contiguous memory allocation has the advantage of reducing memory wastage, but it increases the overhead of address translation. Because the process is placed in different locations in memory, execution slows, as time is consumed in address translation.
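The extra translation step can be sketched as follows. The page size and page-table contents here are illustrative assumptions:

```python
PAGE_SIZE = 4096

def translate(page_table, logical_addr):
    """Split a logical address into (page, offset), then map the page
    to its frame; frames need not be adjacent in physical memory."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]      # one extra lookup on every access
    return frame * PAGE_SIZE + offset

# Pages 0 and 1 of a process live in non-adjacent frames 7 and 2.
page_table = {0: 7, 1: 2}
```

This lookup on every memory access is the translation overhead the text refers to; hardware keeps it tolerable, but it is never free.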
Contiguous versus Non-contiguous Storage allocation

1. Contiguous: allocates one single contiguous block of memory to the process; memory is allocated in a continuous fashion.
   Non-contiguous: divides the process into several blocks and places them in different address spaces of memory; memory is allocated in a non-contiguous fashion.

2. Contiguous: no overhead of address translation during execution of the process.
   Non-contiguous: there is overhead of address translation during execution of the process.

3. Contiguous: the process executes faster, because the whole process is in one sequential block.
   Non-contiguous: execution of the process is slower, because the process resides in different locations in memory.

4. Contiguous: easier for the operating system to control.
   Non-contiguous: more difficult for the operating system to control.

5. Contiguous: memory space is divided into fixed-sized partitions, and each partition is allocated to only a single process.
   Non-contiguous: the process is divided into several blocks, which are placed in different parts of memory according to the availability of memory space.

6. Contiguous: includes single-partition allocation and multi-partition allocation.
   Non-contiguous: includes paging and segmentation.

7. Contiguous: the operating system maintains a table listing all available and occupied partitions in memory.
   Non-contiguous: a table is maintained for each process, carrying the base address of each block the process has acquired in memory.

8. Contiguous: there is wastage of memory.
   Non-contiguous: there is no wastage of memory.

9. Contiguous: swapped-in processes are arranged in their originally allocated space.
   Non-contiguous: swapped-in processes can be arranged anywhere in memory.

SINGLE USER CONTIGUOUS STORAGE ALLOCATION


Early computer systems allowed only one person at a time to use a machine, and all the machine's resources were dedicated to that user. Billing was straightforward: the user was charged for all the resources whether or not the user's job required them. In fact, the normal billing mechanisms were based on wall-clock time: the system operator gave the user the machine for some time interval and charged a flat hourly rate. The programmer wrote all the code necessary to implement a particular application, including the highly detailed machine-level input/output instructions.


The memory organization for a typical single-user contiguous memory allocation system

System designers consolidated the input/output coding that implemented basic functions into an input/output control system (IOCS). The programmer called IOCS routines (procedures) to do the work instead of having to "reinvent the wheel" for each program. The IOCS greatly simplified and expedited the coding process, and the implementation of input/output control systems may have been the beginning of today's concept of operating systems.
Advantages and Disadvantages of Single Contiguous Allocation
Advantages
1. Simple Allocation
2. Entire Scheme requires less memory
3. Easy to implement and use
Disadvantages
1. Memory is not fully utilized
2. The processor (CPU) is also not fully utilized
3. The user program is limited to the size of available main memory
OVERLAYS
Contiguous memory allocation limited the size of programs that could execute on a system. One way in which a software designer could overcome this memory limitation was to create overlays, which allow the system to execute programs larger than main memory.


Overlay Structure

The programmer divides the program into logical sections. When the program does not need the memory for one section, the system can replace some or all of it with the memory for a needed section. Overlays thus enable programmers to "extend" main memory. However, manual overlay requires careful and time-consuming planning, and the programmer often must have detailed knowledge of the system's memory organization. A program with a sophisticated overlay structure can be difficult to modify. Indeed, as programs grew in complexity, by some estimates as much as 40 percent of programming expense was for organizing overlays. It became clear that the operating system needed to insulate the programmer from complex memory management tasks such as overlays.
Protection in a Single-User System
A process can interfere with the operating system's memory, either intentionally or inadvertently (by mistake), by replacing some or all of its memory contents with other data. If it destroys the operating system, then the process cannot proceed. If the process merely attempts to access memory occupied by the operating system, the user can detect the problem, terminate execution, possibly fix the problem and re-launch the program.

Boundary register

Protection in single-user contiguous memory allocation systems can be implemented with a single boundary register built into the processor.


Memory protection with single-user contiguous memory allocation

The boundary register contains the memory address at which the user's program
begins. Each time a process references a memory address, the system determines if the
request is for an address greater than or equal to that stored in the boundary register.
The hardware that checks boundary addresses operates quickly to avoid slowing
instruction execution. The single boundary register represents a simple protection
mechanism.
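The boundary check described above can be sketched in a few lines. The boundary address here is invented for illustration:

```python
BOUNDARY = 0x4000   # the user's program begins here; below lies the OS

def check_access(addr):
    """Allow a user reference only at or above the boundary address,
    mimicking the hardware check on every memory reference."""
    return addr >= BOUNDARY
```

Real hardware performs this comparison in parallel with instruction execution, which is why the text notes that the check must operate quickly.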

Single-Stream Batch Processing

Early single-user real memory systems were dedicated to one job for more than the job's execution time. Jobs generally required considerable setup time, during which the operating system was loaded and tapes and disk packs were mounted; when jobs completed, they required considerable teardown time as tapes and disk packs were removed. Designers realized that if they could automate various aspects of job-to-job transition, they could considerably reduce the amount of time wasted between jobs. This led to the development of batch-processing systems.

In single stream batch processing, jobs are grouped in batches by loading them
consecutively onto tape or disk. A job stream processor reads the job control language
statements and facilitates the setup of the next job. Batch-processing systems greatly
improved resource utilization and helped demonstrate the real value of operating
systems and intensive resource management. Single-stream batch-processing systems
were the state of the art in the early 1960s.


REAL MEMORY MANAGEMENT TECHNIQUES

The main memory has to accommodate both the operating system and the user space, and the user space has to accommodate various user processes. Several user processes must also reside in main memory at the same time. The main techniques are:

 Fixed/Static Partitioning

 Variable/Dynamic Partitioning

 Simple/Basic Paging

 Simple/Basic Segmentation

Fixed partition multiprogramming

Even with batch-processing operating systems, single-user systems still waste a considerable amount of the computing resource. A program consumes the CPU until input or output is needed; when an I/O request is issued, the job often cannot continue until the requested data is either sent or received. Input and output speeds are extremely slow compared with CPU speeds. Designers therefore sought to increase the utilization of the CPU through intensive management and chose to implement multiprogramming systems, in which several users simultaneously compete for system resources. A job currently waiting for I/O yields the CPU to another job that is ready to do calculations, if indeed another job is waiting. In this way, both input/output and CPU calculations can occur simultaneously.


Advantage of Multiprogramming

Multiprogramming requires several jobs to reside in the computer’s main storage at once. When one job requests input/output, the CPU may be immediately switched to another job and may do calculations without delay. Multiprogramming requires considerably more storage than a single-user system, but the improved use of the CPU and the peripheral devices more than justifies the expense of the additional storage.

Fixed Partition Multiprogramming: Absolute Translation and Loading

In fixed partition multiprogramming, main storage is divided into a number of fixed-size partitions, each holding a single job. The CPU is switched rapidly between users to create the illusion of simultaneity.

Jobs were translated with absolute assemblers and compilers to run only in a specific partition. If a job was ready to run and its partition was occupied, that job had to wait, even if other partitions were available. This wasted the storage resource, but the OS was relatively straightforward to implement.

Memory waste under fixed partition multiprogramming with absolute translation and loading


This is an extreme example of poor storage utilization in fixed partition multiprogramming with absolute translation and loading: the jobs waiting for partition 3 are small and could fit in the other partitions, but with absolute translation and loading these jobs may run only in partition 3. The other two partitions remain empty.

Fixed partition multiprogramming: relocatable translation and loading

Relocating compilers, assemblers and loaders are used to produce relocatable programs, which can run in any available partition that is large enough to hold them. This scheme eliminates some of the storage waste characteristic of multiprogramming with absolute translation and loading.
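With relocatable loading, assigning a job to a partition reduces to finding any free partition that is large enough. A minimal sketch, with invented partition sizes:

```python
def assign(job_size, partitions, occupied):
    """partitions: list of partition sizes; occupied: set of indices
    already holding a job. Returns the chosen index, or None if the
    job must wait for a large-enough partition to free up."""
    for i, size in enumerate(partitions):
        if i not in occupied and size >= job_size:
            occupied.add(i)
            return i
    return None

partitions = [20, 50, 100]
occupied = set()
```

Under absolute loading the loop above would instead check only the one partition the job was compiled for, which is exactly the waste the text describes.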

Protection in multiprogramming systems

Allowing Relocation and Transfers between partitions. Protection implemented by


the use of several boundary registers: low and high boundary registers,
or base register with length. Fragmentation occurs if user programs cannot
completely fill a partition - wasteful.


Fragmentation in fixed partition multiprogramming

Storage fragmentation occurs in every computer system. In fixed partition multiprogramming systems, fragmentation occurs either because user jobs do not completely fill their designated partitions or because a partition remains unused when it is too small to hold a waiting job. Consider the warehouse analogy: multiple jobs of different types (and sizes) enter storage in different partitions. Several users simultaneously compete for system resources, with the system switching between I/O-bound and compute-bound jobs, for instance. To take advantage of this sharing of the CPU, it is important for many jobs to be present in main memory.

Internal fragmentation in a fixed partition multiprogramming system
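Internal fragmentation is the unused space left inside each fixed partition by a job smaller than the partition. The partition and job sizes below are illustrative assumptions:

```python
# (partition size, size of job occupying it)
partitions = [(100, 70), (200, 200), (150, 90)]

def internal_fragmentation(parts):
    """Total storage wasted inside partitions by jobs smaller
    than their partition."""
    return sum(size - used for size, used in parts)
```

In this example the three partitions waste 30, 0 and 60 units respectively.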

VARIABLE PARTITION MULTIPROGRAMMING

System designers found fixed partitions too restrictive, and decided on an obvious improvement: allow jobs to occupy as much storage as they need. No fixed boundaries are observed; instead, jobs are given exactly as much storage as they require. This scheme is called variable partition multiprogramming.

In variable partition multiprogramming, as jobs arrive, the scheduling mechanisms decide which may proceed, and each is given as much storage as it needs. Initially there is no wastage: a job's partition is exactly the size of the job. Nevertheless, every storage organization scheme involves some degree of waste.

In variable partition multiprogramming, the waste does not become obvious until jobs start to finish, leaving holes in main storage. These holes can be used for other jobs, but the remaining holes get smaller, eventually becoming too small to hold new jobs.


Initial partition assignment in variable partition multiprogramming

Storage “holes” in variable partition multiprogramming

Variable partition multiprogramming characteristics

 Coalescing holes

 Storage compaction

 Storage placement strategies


Coalescing holes

Job finishes in variable partition multiprogramming system, check whether the


storage being freed (unrestricted) borders on other free storage areas (holes). The free
storage list,

❖ An additional hole.
❖ A single hole reflecting the merger of the existing hole.
❖ New adjacent hole.

The process of merging adjacent holes to form a single larger hole in called coalescing.
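Coalescing can be sketched over a free list of (start, size) holes. The example layout is an assumption:

```python
def coalesce(holes):
    """Merge free holes that border one another into larger holes.
    Holes are (start, size) pairs."""
    holes = sorted(holes)
    merged = [list(holes[0])]
    for start, size in holes[1:]:
        last = merged[-1]
        if last[0] + last[1] == start:   # hole begins where the last ends
            last[1] += size              # merge into one larger hole
        else:
            merged.append([start, size])
    return [tuple(h) for h in merged]
```

For example, freeing the block at address 100 next to an existing hole at 0 leaves one hole of 150 units rather than two smaller ones.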

Storage compaction

Storage compaction in variable partition multiprogramming

When a job requires a certain amount of main storage, it may happen that no individual hole is large enough to hold the job, even though the sum of all the holes is larger than the storage needed by the new job. The technique of storage compaction involves moving all occupied areas of storage to one end or the other of main storage. This rearranges memory into a single contiguous block of free space and a single contiguous block of occupied space. It is also referred to as "burping the storage" or garbage collection.
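Compaction can be sketched as sliding every occupied block to the low end of memory, leaving all free space as one contiguous hole. The block layout is an illustrative assumption:

```python
def compact(blocks, memory_size):
    """blocks: list of (start, size, job). Returns the relocated
    blocks and the single resulting hole as (start, size)."""
    addr, relocated = 0, []
    for _start, size, job in sorted(blocks):
        relocated.append((addr, size, job))   # slide block down to addr
        addr += size
    return relocated, (addr, memory_size - addr)
```

Note that compaction requires relocating running jobs, so it consumes processor time; that cost is the usual argument against compacting too often.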

Storage placement strategies

1. Best-fit strategy
An incoming job is placed in the hole in which it fits best (i.e., the amount of free space left over is minimal).
2. First-fit strategy
The job is placed in the first available hole large enough to hold it.
3. Worst-fit strategy
The job is placed in the largest available hole. The remaining space may still be large enough to hold another job.

First fit strategy

This method keeps the free/busy list organized by memory location, from low-order to high-order memory. A job claims the first available hole with space greater than or equal to its size. The operating system does not search for the most appropriate partition; it simply allocates the job to the nearest memory partition available with sufficient size.


Best fit strategy

This method keeps the free/busy list in order by size, smallest to largest. The operating system searches the whole of memory according to the size of the given job and allocates the job to the closest-fitting free partition, making efficient use of memory. Here the holes are ordered from the smallest to the largest.

Worst fit strategy


In this allocation technique, the allocator traverses the whole memory, always searching for the largest hole/partition, and the process is then placed in that hole/partition. It is a slow process because the entire memory must be traversed to find the largest hole.
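The three strategies can be sketched over a list of free holes, each a (start, size) pair. The example hole list is an assumption:

```python
def first_fit(holes, need):
    """First hole (by memory location) large enough for the job."""
    return next((h for h in sorted(holes) if h[1] >= need), None)

def best_fit(holes, need):
    """Smallest hole that still fits: least space left over."""
    fits = [h for h in holes if h[1] >= need]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, need):
    """Largest hole: the remainder may still hold another job."""
    fits = [h for h in holes if h[1] >= need]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 40), (100, 10), (200, 25)]
```

For a job needing 20 units, first-fit and worst-fit both choose the 40-unit hole at address 0, while best-fit chooses the 25-unit hole at address 200, leaving the least waste.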
