
CS1208

Operating Systems

Memory Management

Bachelor of Science (Hons) in Computer Science | Software Engineering | Information Technology


Department of Computing
Faculty of Computing and Technology
Saegis Campus
Nugegoda
Learning outcomes

• After completing this lesson, you will be able to explain the different memory
management requirements, the concept behind swapping and virtual memory, the
applicability of paging, and the different segmentation methods.
The lesson will therefore enable you to:
• List the different memory management requirements
• Explain what is meant by swapping
• Explain what is meant by virtual memory, paging and segmentation
Basic Memory Management

Memory is an important resource that must be carefully managed.

The memory available in average computers has increased over the years, but the
sizes of programs are growing as well. Therefore careful memory management is of
paramount importance.
• The part of the operating system that manages memory is called the memory manager.
The functions of the memory manager are to:
• Keep track of which parts of memory are in use and which are not.
• Allocate memory to processes.
• De-allocate memory from processes when it is no longer needed.
• Manage swapping between main memory and disk when required.
Basic Memory Management

Different types of memory are available in a computer; this is known as the
memory hierarchy. Different types of memory are used for different purposes,
and moving information between them is handled by the memory manager.
The memory hierarchy consists of:
• A small amount of very fast, expensive, volatile cache memory,
• Tens of megabytes of medium-speed, medium-priced, volatile main memory (RAM),
• Tens or hundreds of gigabytes of slow, cheap, non-volatile disk storage.
Basic Memory Management

Uni-programming system
• In a uni-programming system only a single program is executing at a given time,
and therefore memory is separated into two parts: one for the operating system
and the other for the program.

Multiprogramming system
• In a multiprogramming environment, multiple programs are executing at a given
time. Therefore the memory manager must dynamically allocate memory to
accommodate the different programs. It has to ensure that a reasonable supply of
ready processes is available, to make maximum use of the processor time.
Memory management requirements

Relocation
• In a multiprogramming environment the memory is shared among a number of
processes. During program execution it might be necessary to swap these processes
between main memory and the disk.
• Once a process is reloaded into memory we cannot guarantee that it will be
loaded into the same location from which it was swapped out.
• The OS must be aware of the location of the process control information, the
execution stack and the entry point of the program.
• Furthermore, the OS must be able to handle the memory references within the
program. To satisfy these requirements, it must be possible to map the memory
references within the program onto actual physical memory addresses.
• This mapping of addresses, depending on where the process is loaded in memory,
is known as relocation.
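
As a minimal illustration of relocation, assume a simple base-and-limit scheme
(the C names below are hypothetical, not a real API): the hardware adds a base
register to every program-relative address, and a limit register rejects
references outside the space allocated to the process (the protection
requirement discussed next).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical relocation registers for one process. */
typedef struct {
    uint32_t base;   /* physical address at which the process was loaded */
    uint32_t limit;  /* size of the memory allocated to the process      */
} relocation_regs;

/* Map a logical (program-relative) address onto a physical address.
 * Returns false if the reference falls outside the process's region. */
bool map_address(relocation_regs r, uint32_t logical, uint32_t *physical)
{
    if (logical >= r.limit)
        return false;              /* protection violation */
    *physical = r.base + logical;  /* relocation           */
    return true;
}

int main(void)
{
    /* Example: the process was loaded at physical address 0x40000. */
    relocation_regs r = { .base = 0x40000, .limit = 0x10000 };
    uint32_t phys;
    if (map_address(r, 0x1234, &phys))
        printf("logical 0x1234 -> physical 0x%X\n", (unsigned)phys);
    return 0;
}

If the process is later swapped back in at a different address, only the base
register has to change; the addresses inside the program remain program-relative.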
Memory management requirements

Protection
• In a multiprogramming environment each process must be protected against
intentional or unintentional interference by other programs. All memory
references generated by a process must be validated to ensure that they refer
only to the memory space allocated to that process.

Sharing
• The memory management system must allow controlled access to shared areas of
memory without violating protection. For example, if a number of processes are
executing the same program, they can all share a single copy of it.
Memory management requirements

Logical organization
• At the physical level, memory is organized as a linear, one-dimensional address
space. Most programs, however, are designed as a set of modules. The OS and the
computer hardware must be able to deal with user programs and data in this
modular form, and memory can also be logically organized in a modular layout to
handle this (e.g. segmentation).

Physical organization
• Computer memory can be broadly categorized as main memory and secondary memory.
Secondary memory is used for long-term storage of programs and data, while main
memory holds the programs and data currently in use. Because programs are large
and main memory is limited, the swapping of information between main and
secondary memory is a major concern. The memory management system must ensure
that this flow of information is handled efficiently within the physical memory
constraints.
Mono-programming

• The simplest form of memory management is to run a single process at a time;
this is known as mono-programming. In this scheme only one program can reside in
main memory and execute at a particular moment, so multiprogramming cannot be
achieved.
• When multiprogramming is used and multiple processes are executing at a given
instant, it must be possible to keep each of these processes in a different area
of memory.
• The easiest way to do this is to divide memory into partitions and load
different processes into different partitions. The partitions can either be
fixed or change dynamically.
Memory partitioning
Fixed partitioning
When fixed partitioning is used, a fixed portion of main memory is used by the
operating system and the remaining memory is partitioned into areas with fixed
boundaries.
Processes are loaded into these partitions.
The fixed partitioning scheme is relatively simple, has little processing
overhead and requires minimal operating system software.
Memory partitioning

• There are some disadvantages associated with fixed partitioning.
– The number of active processes in the system is limited by the number of
partitions.
– Irrespective of its size, each process occupies a whole partition. When a
process is smaller than its partition, the space left over inside the partition
is wasted. This is known as internal fragmentation, and it means memory is not
utilized efficiently. In contrast, a process might be larger than any partition.
In that case it is not possible to load the entire process into a partition, so
the process is divided into pieces called overlays, which can be loaded into the
partition as needed.
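For example, if a 6 MB process is loaded into an 8 MB partition, the 2 MB left
unused inside that partition is internal fragmentation; a 20 MB process, on the
other hand, would not fit into an 8 MB partition at all and would have to be
divided into overlays.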
Fixed partitioning can be implemented in two ways:

• Equal size partitions
When equal size partitions are used, any process whose size is less than or
equal to the partition size can be loaded into any free partition. If a process
needs to execute when all the partitions are full, the OS has to select a
process that is not running or ready, swap it out of its partition, and load the
new process in.

• Unequal size partitions
When unequal size partitions are used, an algorithm is needed to choose an
appropriate partition for each process. The easiest method is to assign each
process to the smallest partition into which it will fit, which minimizes
internal fragmentation. With this method a scheduling queue is maintained for
each partition. Depending on the circumstances this can be inefficient: at a
given instant most of the waiting processes may be small, so they all queue for
the smaller partitions while the larger partitions remain unused.
Another possibility is to maintain a single queue for all processes. When
loading a process into memory, the smallest available partition that will hold
the process is selected.
If none of the partitions is free, another process will have to be swapped out.
While preference is given to the smallest partition that will hold the process,
the priority of the process is also taken into account.
Generally, blocked processes are selected for swapping rather than ready
processes.

Fixed memory partitions with separate input queues


Source: Modern Operating Systems by Andrew S. Tanenbaum
Fixed memory partitions with a single input queue
Source: Modern Operating Systems by Andrew S. Tanenbaum
Memory partitioning

Dynamic partitioning
• By allocating partitions dynamically, the limitations of fixed partitioning
can be lessened. In this method the size and the number of partitions vary
depending on the requirements.
• Whenever a process is brought into memory it is allocated exactly the amount
of memory it needs. When this method is used, a more sophisticated scheme is
needed to decide the best area of memory into which a process should be loaded.
Swapping
On multiprogramming systems there may not be enough main memory to hold all the
currently active processes. Therefore, as a solution, the excess processes are
kept on disk and are brought into memory dynamically.
The simplest strategy for doing this is called swapping. In this method a whole
process is brought into memory, runs for a while, and is then put back on the
disk.

Change in memory allocation as processes come into memory and leave it.
Source: Modern Operating Systems by Andrew S. Tanenbaum
Swapping
As an example, according to the above diagram, at the initial stage only process
A is loaded into memory. Next, processes B and C are also created. Then A is
swapped out of memory and D is swapped in its place. This creates a hole in the
memory that is too small to hold another process.
This is known as external fragmentation. Later B is also swapped out, leaving
adequate room to swap A back in. But in this instance A is loaded into a
different area of memory than it previously occupied, so the addresses contained
in it must be relocated.
One technique for overcoming external fragmentation is compaction. In this
method the separate holes in memory are combined into one by moving all the
processes downward as far as possible.
However, this is a time-consuming procedure and is considered wasteful of
processor time.
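
As a rough sketch of what compaction involves (the array-based representation
and the names below are assumptions made only for illustration), the loaded
processes can be slid downward one after another, leaving a single hole at the
top of memory:

#include <stddef.h>
#include <string.h>

/* One loaded process: start address and length within a simulated memory
 * area. regions[] is assumed to be sorted by ascending start address. */
struct region { size_t start, length; };

void compact(unsigned char *memory, struct region *regions, size_t nregions)
{
    size_t next_free = 0;
    for (size_t i = 0; i < nregions; i++) {
        if (regions[i].start != next_free) {
            /* Slide the process image down to the lowest free address.
             * In a real OS its relocation information must be updated too. */
            memmove(memory + next_free, memory + regions[i].start,
                    regions[i].length);
            regions[i].start = next_free;
        }
        next_free += regions[i].length;
    }
    /* Everything from next_free upward is now one contiguous hole. */
}

The cost is proportional to the total amount of memory moved, which is why
compaction is usually considered too expensive to perform frequently.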
Swapping
Generally processes are not of fixed size and can grow dynamically. If a growing
process is adjacent to a hole, that free memory can be allocated to the process.
If not, the process will have to be moved to a different, larger area of memory,
or one or more other processes will have to be swapped out to make enough room
for the growing process. If memory cannot be provided in any of these ways, the
process will have to wait or be killed.
In order to reduce the overhead associated with this situation, processes are
allocated some additional memory that they can grow into. This can be handled in
two ways (see the diagram below).
• Allocate extra memory at the end of the process, as in figure (a); or, if a
process has two growing segments, such as the data segment and the stack, as in
figure (b), place the extra memory between these two areas so that either of
them can grow into it.
Swapping

(a) Allocating space for a growing data segment. (b) Allocating space for a
growing stack and a growing data segment.
Source: Modern Operating Systems by Andrew S. Tanenbaum
Swapping
There are two ways of keeping track of dynamically allocated memory: bitmaps and
linked lists.
Memory management with bitmaps
In this method the memory is divided into allocation units. Corresponding to
each allocation unit there is a bit in the bitmap: 0 indicates a free allocation
unit and 1 a unit that is in use.

(a) A part of the memory with five processes and three holes (b) The corresponding bitmap,
indicating the status of each allocation unit. Source: Modern Operating Systems by Andrew S.
Tanenbaum
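
As an illustration only (the sizes and helper names below are assumptions, not
taken from any real operating system), allocation with a bitmap amounts to
searching for a run of consecutive 0 bits long enough to hold the process:

#include <stddef.h>

#define TOTAL_UNITS 1024                      /* allocation units managed   */
static unsigned char bitmap[TOTAL_UNITS / 8]; /* one bit per unit: 0 = free */

static int  bit_is_set(size_t i) { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(size_t i)    { bitmap[i / 8] |= (unsigned char)(1u << (i % 8)); }

/* Return the index of the first run of `count` free units, or -1 if none. */
long find_free_run(size_t count)
{
    size_t run = 0;
    for (size_t i = 0; i < TOTAL_UNITS; i++) {
        run = bit_is_set(i) ? 0 : run + 1;
        if (run == count)
            return (long)(i - count + 1);     /* start of the free run */
    }
    return -1;                                /* no hole is big enough */
}

/* Mark the chosen units as allocated. */
void mark_allocated(size_t start, size_t count)
{
    for (size_t i = start; i < start + count; i++)
        set_bit(i);
}

The main drawback visible in this sketch is the search itself: finding a run of
consecutive free units may require scanning a large part of the bitmap.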
Swapping
Memory management with linked lists
In this method a linked list of allocated and free memory
segments is maintained. Each entry in the list specifies a hole
(H) or a process (P), the address at which it starts, the length
and a pointer to the next entry.

(a) A part of the memory with five processes and three holes. (b) The same
information represented as a linked list of segments.
Source: Modern Operating Systems by Andrew S. Tanenbaum
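
A minimal C sketch of one such list entry might look as follows (the type and
field names are illustrative, not taken from any particular system):

#include <stddef.h>

enum seg_kind { HOLE, PROCESS };

struct segment {
    enum seg_kind   kind;    /* hole (H) or process (P)             */
    size_t          start;   /* address at which the segment starts */
    size_t          length;  /* length of the segment               */
    struct segment *next;    /* pointer to the next entry           */
};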
Swapping
When a process is terminated or swapped out, and when processes are created or
swapped in, the list must be updated.
Several algorithms can be used to allocate memory to processes.

First Fit:
The memory manager scans the list from the beginning until it finds a hole that
is big enough to hold the process. This is a fast algorithm because it searches
as little as possible, and on average it leaves relatively large holes in memory.
Next Fit:
Similar to first fit, it searches for the first hole that is large enough, but
the memory manager starts searching the list from the place where it stopped the
previous time rather than from the beginning. In practice this gives slightly
worse performance than first fit.
Swapping
Best Fit:
The entire list is searched to find the smallest hole that can accommodate the
process. This method is slow, since the whole list must be searched, and it
tends to fill memory with tiny, useless holes.

Worst Fit:
Takes the largest available hole, on the assumption that the part left over will
be big enough to be useful. This cannot be considered a very successful solution
either.
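
First fit and best fit can be sketched in C over the segment list from the
earlier sketch (the struct segment type from above is assumed; this is an
illustration, not a complete allocator):

/* First fit: return the first hole that is at least `size` units long. */
struct segment *first_fit(struct segment *list, size_t size)
{
    for (struct segment *s = list; s != NULL; s = s->next)
        if (s->kind == HOLE && s->length >= size)
            return s;
    return NULL;                     /* no hole is big enough */
}

/* Best fit: scan the entire list and return the smallest hole that can
 * still hold `size` units. */
struct segment *best_fit(struct segment *list, size_t size)
{
    struct segment *best = NULL;
    for (struct segment *s = list; s != NULL; s = s->next)
        if (s->kind == HOLE && s->length >= size &&
            (best == NULL || s->length < best->length))
            best = s;
    return best;
}

In a real allocator the chosen hole would then be split into a process entry
followed by a smaller hole, and adjacent holes would be merged again when a
process terminates or is swapped out.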
Virtual Memory

Virtual memory enables the execution of processes that are larger than the
available physical memory.
For example, the combined size of the program, the stack and the data may exceed
the amount of physical memory available.
The operating system keeps the currently executing parts of the program in
memory and the remainder on disk.
Different parts of the program are swapped between memory and disk as required.
Virtual Memory

Many advantages can be obtained by using virtual memory:
• More processes can be maintained in memory.
• Processes that require more memory than is available in main memory can be
executed.

When virtual memory is used, the task of dividing the program into pieces is
handled by the operating system and the hardware, freeing the programmer from
this additional burden.
Paging
• A process generates memory addresses; these are known as virtual addresses and
together they form the virtual address space.
• The virtual address space is divided into units called pages. The
corresponding units in physical memory are called page frames.
• When a virtual address is generated it is sent to the Memory Management Unit
(MMU), which maps the virtual address onto a physical memory address.
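
As an illustration of the mapping the MMU performs (the page size, table layout
and names below are assumptions chosen only for this example), a virtual address
is split into a page number and an offset, and the page number is replaced by
the corresponding page frame number:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u     /* assumed page size                          */
#define NUM_PAGES 16u       /* assumed size of the virtual address space  */

typedef struct {
    bool     present;       /* is the page currently in a page frame? */
    uint32_t frame;         /* page frame number in physical memory   */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Returns true and fills *physical on success; false means the page is
 * not in memory, i.e. a page fault would be raised. */
bool translate(uint32_t virtual_addr, uint32_t *physical)
{
    uint32_t page   = virtual_addr / PAGE_SIZE;
    uint32_t offset = virtual_addr % PAGE_SIZE;

    if (page >= NUM_PAGES || !page_table[page].present)
        return false;                               /* page fault */

    *physical = page_table[page].frame * PAGE_SIZE + offset;
    return true;
}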
Paging

The position and function of the MMU


Source: Modern Operating Systems by Andrew S. Tanenbaum
Segmentation
Segmentation was introduced to allow programs and data to be divided into
logically independent address spaces and to aid sharing and protection.
A segment is a logical address space consisting of a linear sequence of
addresses, from 0 up to some maximum.
A segment can grow or shrink dynamically without affecting other segments, and
different segments may have different lengths.
A single segment normally contains only a single type of object at a given
instance. As an example, a program may consist of procedures, an array, a stack
and a collection of variables; each of these would be placed in a different
segment. Unlike paged virtual memory, which is one-dimensional, a segmented
memory is two-dimensional: an address consists of a segment number and an offset
within that segment.
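
A minimal sketch of such a two-dimensional address, assuming a simple segment
table holding a base and a length per segment (the names and sizes below are
illustrative):

#include <stdbool.h>
#include <stdint.h>

#define NUM_SEGMENTS 8u

typedef struct {
    uint32_t base;      /* physical address at which the segment starts */
    uint32_t length;    /* current length of the segment                */
} seg_desc;

static seg_desc segment_table[NUM_SEGMENTS];

/* A segmented address is the pair (segment, offset); the offset is checked
 * against the segment's current length before the base is added. */
bool translate_segmented(uint32_t segment, uint32_t offset, uint32_t *physical)
{
    if (segment >= NUM_SEGMENTS || offset >= segment_table[segment].length)
        return false;               /* out of range: protection fault */
    *physical = segment_table[segment].base + offset;
    return true;
}

Because each segment is checked against its own length, a segment can grow or
shrink simply by updating its entry in the table, without affecting the other
segments.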
Ashen Gunawardena
MSc in Information Technology (Reading ),MSc in Digital Marketing(Reading), MSc in
Network and Information Security, BEng (Hons) Computer Networking, Certificate in
Cyber Security, HDCS, CCSK, CCNP, CCNA, PCAP, CAP, CPP, NDG Linux Essentials.

Email :
[email protected]

Linkedin :
https://lk.linkedin.com/in/ashengunawardena

0768530641

Saegis Campus
